
Remote Sens., Volume 11, Issue 9 (May-1 2019) – 150 articles

Cover Story: Mosquitoes are vectors of major pathogens worldwide. Mapping their distribution can contribute effectively to disease surveillance and control systems. In this study, we used remote sensing data as the input to a weather-driven model of mosquito population dynamics applied to Rift Valley fever vector species in northern Senegal. The model predictions were consistent with field entomological data on adult mosquito abundances. Based on satellite-derived rainfall and temperature data, dynamic maps of three potential Rift Valley fever vector species, Aedes vexans, Culex poicilipes, and Culex tritaeniorhynchus, were then produced at a regional scale on a weekly basis. Where direct weather measurements are sparse, these maps can support policymakers in optimizing surveillance and control interventions against Rift Valley fever in Senegal.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; PDF is the official version. To view a paper in PDF, click its "PDF Full-text" link and open the file with the free Adobe Reader.
19 pages, 6761 KiB  
Article
Floating Xylene Spill Segmentation from Ultraviolet Images via Target Enhancement
by Shuyue Zhan, Chao Wang, Shuchang Liu, Kaibo Xia, Hui Huang, Xiaorun Li, Caicai Liu and Ren Xu
Remote Sens. 2019, 11(9), 1142; https://doi.org/10.3390/rs11091142 - 13 May 2019
Cited by 6 | Viewed by 3409
Abstract
Automatic segmentation of colorless floating hazardous and noxious substance (HNS) spills is an emerging research topic. Xylene is a priority HNS because of its high risk of involvement in HNS incidents. This paper presents a novel algorithm for the target enhancement of xylene spills and their segmentation in ultraviolet (UV) images. To improve the contrast between targets and backgrounds (waves, sun reflections, and shadows), we developed a global background suppression (GBS) method to remove irrelevant objects from the background, followed by an adaptive target enhancement (ATE) method to enhance the target. Based on the histogram of the processed image, we designed an automatic algorithm to calculate the optimal number of clusters, which is usually determined manually in traditional cluster segmentation methods. In addition, pre-segmentation and post-segmentation processing were adopted to improve performance. Experimental results on our UV image datasets demonstrated that the proposed method achieves good segmentation of chemical spills against different backgrounds, especially in images with strong waves, uneven intensities, and low contrast. Full article
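The GBS idea described in the abstract, suppressing the background by thresholding the intensity distribution before enhancement, can be sketched in a few lines. This is a minimal stand-in, not the authors' GBS: the threshold rule (mean + k·std) and the parameter k are illustrative assumptions, whereas the paper selects among candidate thresholds.

```python
import statistics

def suppress_background(pixels, k=1.0):
    # Toy global background suppression: estimate one threshold from the
    # intensity distribution (assumed rule: mean + k * std) and zero out
    # every pixel darker than it, leaving only candidate target pixels.
    t = statistics.mean(pixels) + k * statistics.pstdev(pixels)
    return [p if p >= t else 0 for p in pixels]
```

On a flat dark background with a single bright target, only the target pixels survive the suppression.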
16 pages, 5544 KiB  
Article
A Geometric Barycenter-Based Clutter Suppression Method for Ship Detection in HF Mixed-Mode Surface Wave Radar
by Jiazhi Zhang, Xin Zhang, Weibo Deng, Lei Ye and Qiang Yang
Remote Sens. 2019, 11(9), 1141; https://doi.org/10.3390/rs11091141 - 13 May 2019
Cited by 4 | Viewed by 3671
Abstract
Nonhomogeneous clutter is a major challenge for ship detection in high-frequency mixed-mode surface wave radar. In this paper, a geometric barycenter-based reduced-dimension space-time adaptive processing method is proposed to suppress the clutter. Given the measured dataset, the range correlation of sea clutter is first investigated. Then, joint domain localized processing is applied to address the shortage of training samples in a practical system. A geometric barycenter-based training data selector is presented to select valid training samples and improve the accuracy of the clutter covariance matrix estimation. Finally, the validity of the proposed method is verified using experimental data; the results show that it outperforms the conventional method in the nonhomogeneous environment of a practical system. Full article
(This article belongs to the Special Issue Remote Sensing for Maritime Safety and Security)
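The core of such methods is estimating the clutter covariance matrix from carefully chosen training range bins. The sketch below is a simplified stand-in: it keeps the bins whose total power lies closest to the median (a crude proxy; the paper's selector uses a geometric-barycenter criterion) and then forms the usual sample covariance estimate. The `snapshots` layout and the power-based rule are illustrative assumptions, not the paper's algorithm.

```python
def select_training(snapshots, n_keep):
    # Keep the training bins whose total power is closest to the median
    # power -- a stand-in for the geometric-barycenter selector, which
    # likewise aims to discard outlier (contaminated) bins.
    power = [sum(abs(v) ** 2 for v in x) for x in snapshots]
    med = sorted(power)[len(power) // 2]
    order = sorted(range(len(snapshots)), key=lambda i: abs(power[i] - med))
    return [snapshots[i] for i in order[:n_keep]]

def sample_covariance(snapshots):
    # Standard SCM estimate R = (1/K) * sum_k x_k x_k^H over selected bins.
    n, K = len(snapshots[0]), len(snapshots)
    return [[sum(x[i] * x[j].conjugate() for x in snapshots) / K
             for j in range(n)] for i in range(n)]
```

With cleaner training bins, the SCM better matches the true clutter covariance, which is what the adaptive filter ultimately inverts.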
23 pages, 20181 KiB  
Article
Pedestrian Walking Distance Estimation Based on Smartphone Mode Recognition
by Qu Wang, Langlang Ye, Haiyong Luo, Aidong Men, Fang Zhao and Changhai Ou
Remote Sens. 2019, 11(9), 1140; https://doi.org/10.3390/rs11091140 - 13 May 2019
Cited by 36 | Viewed by 6056
Abstract
Stride length and walking distance estimation are becoming a key aspect of many applications. One way to enhance the accuracy of pedestrian dead reckoning is to accurately estimate the stride length of pedestrians. Existing stride length estimation (SLE) algorithms perform well when a pedestrian walks at normal speed in a fixed smartphone mode (handheld), where the mode denotes a specific carrying state of the smartphone; their error increases in complex scenes with many mode changes. Since stride length estimation is very sensitive to the smartphone mode, this paper focuses on combining smartphone mode recognition and stride length estimation to provide an accurate walking distance estimate. We combined multiple classification models to recognize five smartphone modes (calling, handheld, pocket, armband, swing). In addition to using a combination of time-domain and frequency-domain features of the smartphone's built-in accelerometer and gyroscope during the stride interval, we constructed higher-order features based on well-established models (Kim, Scarlett, and Weinberg) to model stride length with a machine learning regression model. In the offline phase, we trained a stride length estimation model for each mode. In the online prediction stage, we invoked the stride length estimation model corresponding to the pedestrian's smartphone mode. To train and evaluate our SLE, a dataset with smartphone mode, actual stride length, and total walking distance was collected. We conducted extensive experiments to verify the performance of the proposed algorithm and compare it with state-of-the-art SLE algorithms. Experimental results demonstrate that the proposed walking distance estimation method achieves significant accuracy improvements over existing approaches when a pedestrian walks in complex indoor and outdoor environments with multiple mode changes. Full article
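The Weinberg model referenced above is commonly written as SL ≈ K·(a_max − a_min)^(1/4), relating stride length to the acceleration range within a stride; the online stage then dispatches each stride to the SLE model of its recognized mode. A minimal sketch (K = 0.48 and the dispatch interfaces are illustrative assumptions, not the paper's trained models):

```python
def weinberg_stride(acc, k=0.48):
    # Weinberg's empirical model: stride length grows with the fourth root
    # of the vertical-acceleration range over the stride; k is a per-user
    # calibration constant (0.48 here is only illustrative).
    return k * (max(acc) - min(acc)) ** 0.25

def walking_distance(strides, recognize_mode, models):
    # Online stage, schematically: recognize the smartphone mode of each
    # stride, then apply that mode's SLE model and sum the stride lengths.
    # recognize_mode and models are hypothetical placeholders.
    return sum(models[recognize_mode(s)](s) for s in strides)
```

The per-mode dispatch is the key design choice: one regression model per carrying mode avoids forcing a single model to fit very different sensor signatures.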
20 pages, 4219 KiB  
Article
Validation of Preliminary Results of Thermal Tropopause Derived from FY-3C GNOS Data
by Ziyan Liu, Yueqiang Sun, Weihua Bai, Junming Xia, Guangyuan Tan, Cheng Cheng, Qifei Du, Xianyi Wang, Danyang Zhao, Yusen Tian, Xiangguang Meng, Congliang Liu, Yuerong Cai and Dongwei Wang
Remote Sens. 2019, 11(9), 1139; https://doi.org/10.3390/rs11091139 - 13 May 2019
Cited by 8 | Viewed by 3588
Abstract
The state-of-the-art global navigation satellite system (GNSS) occultation sounder (GNOS) onboard the FengYun 3 series C satellite (FY-3C) has been in operation for more than five years. The accumulated FY-3C GNOS atmospheric data are now ready for use in atmospheric and climate research. This work first introduces FY-3C GNOS into tropopause research and gives error evaluation results for the long-term FY-3C atmosphere profiles. We compare FY-3C results with Constellation Observing System for Meteorology, Ionosphere and Climate (COSMIC) and radiosonde results, and also present the FY-3C global seasonal tropopause patterns. The mean temperature deviation between FY-3C GNOS and COSMIC temperature profiles from January 2014 to December 2017 is globally less than 0.2 K, and the biases of the tropopause height (TPH) and tropopause temperature (TPT) annual cycles derived from the collocated pairs are about 80–100 m and 1–2 K, respectively. Moreover, the correlation coefficients between FY-3C GNOS tropopause parameters and their radiosonde counterparts are generally larger than 0.9, and the corresponding regression coefficients are close to 1. Multiple climate phenomena visible in the seasonal patterns coincide with the results of other relevant studies. Our results demonstrate the long-term stability of FY-3C GNOS atmosphere profiles and the utility of FY-3C GNOS data in climate research. Full article
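The thermal tropopause in studies like this is conventionally the WMO lapse-rate tropopause: the lowest level where the lapse rate −dT/dz falls to 2 K/km or less and its average over the next 2 km also stays at or below 2 K/km. A minimal sketch of that textbook rule (not the authors' exact retrieval code):

```python
def thermal_tropopause(z_km, t_k):
    # WMO lapse-rate tropopause from a profile sorted by increasing height:
    # lowest layer with lapse rate <= 2 K/km whose mean lapse rate over the
    # following 2 km also stays <= 2 K/km.
    lapse = [-(t_k[i + 1] - t_k[i]) / (z_km[i + 1] - z_km[i])
             for i in range(len(z_km) - 1)]
    mid = [0.5 * (z_km[i] + z_km[i + 1]) for i in range(len(z_km) - 1)]
    for i, (z, g) in enumerate(zip(mid, lapse)):
        if g <= 2.0:
            window = [lapse[j] for j in range(i, len(mid)) if mid[j] <= z + 2.0]
            if window and sum(window) / len(window) <= 2.0:
                return z
    return None
```

For an idealized profile cooling at 6.5 K/km up to 12 km and isothermal above, the function returns the first layer midpoint above 12 km.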
8 pages, 232 KiB  
Editorial
Advances in the Remote Sensing of Terrestrial Evaporation
by Matthew F. McCabe, Diego G. Miralles, Thomas R.H. Holmes and Joshua B. Fisher
Remote Sens. 2019, 11(9), 1138; https://doi.org/10.3390/rs11091138 - 13 May 2019
Cited by 23 | Viewed by 5519
Abstract
Characterizing the terrestrial carbon, water, and energy cycles depends strongly on a capacity to accurately reproduce the spatial and temporal dynamics of land surface evaporation. For this, and many other reasons, monitoring terrestrial evaporation across multiple space and time scales has been an area of focused research for a number of decades. Much of this activity has been supported by developments in satellite remote sensing, which have been leveraged to deliver new process insights, model development and methodological improvements. In this Special Issue, published contributions explored a range of research topics directed towards the enhanced estimation of terrestrial evaporation. Here we summarize these cutting-edge efforts and provide an overview of some of the state-of-the-art approaches for retrieving this key variable. Some perspectives on outstanding challenges, issues, and opportunities are also presented. Full article
(This article belongs to the Special Issue Advances in the Remote Sensing of Terrestrial Evaporation)
17 pages, 6253 KiB  
Article
Same Viewpoint Different Perspectives—A Comparison of Expert Ratings with a TLS Derived Forest Stand Structural Complexity Index
by Julian Frey, Bettina Joa, Ulrich Schraml and Barbara Koch
Remote Sens. 2019, 11(9), 1137; https://doi.org/10.3390/rs11091137 - 13 May 2019
Cited by 15 | Viewed by 5168
Abstract
Forests are among the most important terrestrial ecosystems for the protection of biodiversity, but at the same time they are under heavy production pressure. In many cases, management optimized for timber production leads to a simplification of forest structures, which is associated with species loss. In recent decades, the concept of retention forestry has been implemented in many parts of the world to mitigate this loss by increasing structure in managed stands. Although the concept is widely adopted, our understanding of what forest structure is, and how to reliably measure and quantify it, is still lacking. More insight into the assessment of biodiversity-relevant structures is therefore needed when implementing retention practices in forest management to reach ambitious conservation goals. In this study, we compare expert ratings of forest structural richness with a modern light detection and ranging (LiDAR)-based index across 52 research sites where terrestrial laser scanning (TLS) data and 360° photos were taken. Using an online survey (n = 444) with interactive 360° panoramic image viewers, we investigated expert opinions on forest structure and how well measures of structure from terrestrial laser scans mirror experts' estimates. We found that the experts' ratings have a large standard deviation and therefore little agreement; nevertheless, when averaged over the large number of participants, they distinguish stands by structural richness significantly. The stand structural complexity index (SSCI) computed for each site from the LiDAR scan data was shown to reflect some of the variation in expert ratings (p = 0.02). Together with covariates describing participants' personal background, image properties, and terrain variables, we reached a conditional R2 of 0.44 using a linear mixed-effects model. The education of the participants had no influence on their ratings, but practical experience showed a clear effect. Because the SSCI and expert opinion align to a significant degree, we conclude that the SSCI is a valuable tool to support forest managers in the selection of retention patches. Full article
(This article belongs to the Special Issue Virtual Forest)
19 pages, 2832 KiB  
Article
Spatial Prior Fuzziness Pool-Based Interactive Classification of Hyperspectral Images
by Muhammad Ahmad, Asad Khan, Adil Mehmood Khan, Manuel Mazzara, Salvatore Distefano, Ahmed Sohaib and Omar Nibouche
Remote Sens. 2019, 11(9), 1136; https://doi.org/10.3390/rs11091136 - 13 May 2019
Cited by 63 | Viewed by 5190
Abstract
Acquisition of labeled data for supervised hyperspectral image (HSI) classification is expensive in terms of both time and cost. Moreover, manual selection and labeling are often subjective and tend to induce redundancy in the classifier. Active learning (AL) can be a suitable approach for HSI classification, as it integrates data acquisition into classifier design by ranking the unlabeled data to suggest the next query with the highest training utility. However, multiclass AL techniques still tend to include some redundant samples in the classifier. This paper addresses this problem by introducing an AL pipeline that preserves the most representative and spatially heterogeneous samples. The adopted sample-selection strategy uses fuzziness to assess the mapping between the actual output and the approximated a-posteriori probabilities, computed from a marginal probability distribution based on discriminative random fields. The samples selected in each iteration are then passed to a spectral angle mapper-based objective function to reduce inter-class redundancy. Experiments on five HSI benchmark datasets confirmed that the proposed Fuzziness and Spectral Angle Mapper (FSAM)-AL pipeline achieves competitive results compared with state-of-the-art sample-selection techniques, at lower computational cost. Full article
(This article belongs to the Special Issue Image Optimization in Remote Sensing)
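The query-ranking step described in the abstract selects the unlabeled samples whose class memberships are most ambiguous. This is a minimal sketch assuming a common entropy-style fuzziness measure over per-class membership values; the paper's actual estimator derives the memberships from discriminative-random-field posteriors and adds a spectral-angle-mapper diversity step not shown here.

```python
import math

def fuzziness(memberships):
    """Fuzziness of one sample's class-membership vector (values in [0, 1]).

    E = (1/C) * sum_j -[mu_j*log2(mu_j) + (1-mu_j)*log2(1-mu_j)]
    0 for crisp memberships (all 0 or 1), 1 when every mu_j == 0.5.
    """
    def h(p):  # binary entropy term, with 0*log(0) taken as 0
        if p in (0.0, 1.0):
            return 0.0
        return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
    return sum(h(m) for m in memberships) / len(memberships)

def select_most_fuzzy(candidates, k):
    """Indices of the k unlabeled samples with the highest fuzziness."""
    ranked = sorted(range(len(candidates)),
                    key=lambda i: fuzziness(candidates[i]), reverse=True)
    return ranked[:k]
```

For example, a pixel with memberships [0.5, 0.5] outranks one with [0.9, 0.1], so the most uncertain pixels are queried first.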
Figure 1. (a) Pavia University image; (b) true ground truths differentiating nine classes; (c) SVM trained with 1% randomly selected training samples; (d) SVM trained with 10% randomly selected training samples; (e) KNN trained with 1%; (f) KNN trained with 10%; (g) logistic boost (LB) trained with 1%; (h) LB trained with 10%.
Figure 2. Proposed FSAM-AL pipeline; the red-labeled boxes mark our contributions.
Figure 3. Overall accuracy with different percentages of training samples selected in each iteration from the different datasets. Including the selected samples back in the training set significantly improves the classification results for all classifiers. The SVM and ELM classifiers are more robust: with 2% actively selected samples, ELM shows only a 2% difference in classification accuracy across different numbers of samples, whereas for KNN and SVM the difference is considerably larger.
Figure 4. Kappa (κ) accuracy with different percentages of training samples selected in each iteration from the Salinas-A, Salinas, Kennedy Space Center, Pavia University, and Pavia Center datasets, respectively. Including the selected samples back in the training set significantly improves the kappa accuracy for all classifiers. The SVM and ELM classifiers are more robust than the ensemble and KNN classifiers: with 2% actively selected samples, ELM shows only a 2% difference across different numbers of samples, whereas for KNN and SVM the difference is considerably larger. Similar observations hold for the ensemble learning models.
Figure 5. Salinas-A: (a) ground band; (b) true ground truths; (c) training ground truths; (d) test ground truths; ground truths predicted by (e) SVM, (f) KNN, (g) GB, (h) LB, and (i) ELM classifiers with 2% of selected training samples.
Figure 6. Salinas: (a) ground band; (b) true ground truths; (c) training ground truths; (d) test ground truths; ground truths predicted by (e) SVM, (f) KNN, (g) GB, (h) LB, and (i) ELM classifiers with 2% of selected training samples.
Figure 7. Kennedy Space Center: (a) ground band; (b) true ground truths; (c) training ground truths; (d) test ground truths; ground truths predicted by (e) SVM, (f) KNN, (g) GB, (h) LB, and (i) ELM classifiers with 2% of selected training samples.
Figure 8. Pavia University: (a) ground band; (b) true ground truths; (c) training ground truths; (d) test ground truths; ground truths predicted by (e) SVM, (f) KNN, (g) GB, (h) LB, and (i) ELM classifiers with 2% of selected training samples.
Figure 9. Pavia Center: (a) ground band; (b) true ground truths; (c) training ground truths; (d) test ground truths; ground truths predicted by (e) SVM, (f) KNN, (g) GB, (h) LB, and (i) ELM classifiers with 2% of selected training samples.
24 pages, 18396 KiB  
Article
Studying the Influence of Nitrogen Deposition, Precipitation, Temperature, and Sunshine in Remotely Sensed Gross Primary Production Response in Switzerland
by Marta Gómez Giménez, Rogier de Jong, Armin Keller, Beat Rihm and Michael E. Schaepman
Remote Sens. 2019, 11(9), 1135; https://doi.org/10.3390/rs11091135 - 12 May 2019
Cited by 5 | Viewed by 4956
Abstract
Climate, soil type, and management practices have been reported as primary limiting factors of gross primary production (GPP). However, the extent to which these factors predict the GPP response varies with scale and land cover class. Nitrogen (N) deposition has been highlighted as an important driver of primary production in N-limited ecosystems, and it also affects biodiversity in alpine grasslands. However, the effect of N deposition on the GPP response of alpine grasslands has not been studied extensively at large scales. These remote areas are characterized by complex topography, extensive management practices, and high species richness. Remotely sensed GPP products, weather datasets, and available N deposition maps offer the opportunity to analyze how these factors predict GPP in alpine grasslands and to compare the results with those obtained for other land cover classes with intensive and mixed management practices. This study aims at (i) analyzing the impact of N deposition and climatic variables (precipitation, sunshine, and temperature) on the carbon (C) fixation response in alpine grasslands and (ii) comparing the results obtained in alpine grasslands with those from other land cover classes with different management practices. We stratified the analysis into three land cover classes (grasslands, croplands, and croplands/natural vegetation mosaic) and built multiple linear regression models. In addition, we analyzed soil characteristics, such as aptitude for croplands, stone content, and water and nutrient storage capacity, for each class to interpret the results. In alpine grasslands, the explanatory variables explained up to 80% of the GPP response. However, the explanatory performance of the covariates decreased to maximums of 47% in croplands and 19% in croplands/natural vegetation mosaic.
Further information will improve our understanding of how N deposition affects the GPP response in ecosystems with intensive and mixed management practices and high species richness. Nevertheless, this study helps to characterize large-scale patterns of GPP response in regions shaped by local climatic conditions and different land management patterns. Finally, we highlight the importance of including N deposition in C budget models while accounting for N dynamics. Full article
(This article belongs to the Section Environmental Remote Sensing)
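The per-class models described above are ordinary multiple linear regressions of GPP on N deposition (or its logarithm) and the climatic covariates. Below is a self-contained sketch of OLS via the normal equations with an R² diagnostic; for brevity it uses two illustrative predictors and synthetic data, whereas the study fits four predictors on real rasters.

```python
def ols_fit(X, y):
    """Least-squares coefficients for y ~ intercept + X via the normal equations."""
    rows = [[1.0] + list(r) for r in X]  # prepend intercept column
    p = len(rows[0])
    # Build X'X and X'y
    A = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(p)]
    # Gaussian elimination with partial pivoting
    for c in range(p):
        piv = max(range(c, p), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        b[c], b[piv] = b[piv], b[c]
        for r in range(c + 1, p):
            f = A[r][c] / A[c][c]
            for k in range(c, p):
                A[r][k] -= f * A[c][k]
            b[r] -= f * b[c]
    beta = [0.0] * p
    for c in reversed(range(p)):
        beta[c] = (b[c] - sum(A[c][k] * beta[k] for k in range(c + 1, p))) / A[c][c]
    return beta

def r_squared(X, y, beta):
    """Coefficient of determination of the fitted model."""
    pred = [beta[0] + sum(bk * xk for bk, xk in zip(beta[1:], r)) for r in X]
    ybar = sum(y) / len(y)
    ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, pred))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot
```

With exactly linear synthetic data (y = 1 + 2·x1 + 3·x2), the fit recovers the coefficients and R² is 1, mirroring how the study's up-to-80% explained variance is read off the per-class regressions.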
Graphical abstract

Figure 1. Study area: land cover classes distributed across (aggregated, e.g., Alps) biogeographic regions. Coordinate system: WGS84.
Figure 2. Correlation matrix between the dependent and independent variables per year and land cover class (p < 0.01 mostly; see Appendix A, Tables A1–A3). G: GPP; N: N deposition; ln N: natural logarithm of N deposition; P: precipitation; S: sunshine; T: temperature.
Figure A1. Spatial variability and statistics of the nitrogen deposition maps per year. Units: kg N/(ha·year).
Figure A2. Spatial variability of yearly accumulated precipitation: (a–c) croplands (455–3490 mm/year), croplands/natural vegetation mosaic (469–3472 mm/year), and grasslands (534–3825 mm/year) for 2000; (d–f) croplands (530–2919 mm/year), croplands/natural vegetation mosaic (530–2676 mm/year), and grasslands (528–4008 mm/year) for 2007; (g–i) croplands (382–2860 mm/year), croplands/natural vegetation mosaic (386–2793 mm/year), and grasslands (447–3245 mm/year) for 2010.
Figure A3. Spatial variability of yearly relative sunshine duration: (a–c) croplands (36–57%), croplands/natural vegetation mosaic (36–57%), and grasslands (36–57%) for 2000; (d–f) croplands (40–61%), croplands/natural vegetation mosaic (40–62%), and grasslands (41–63%) for 2007; (g–i) croplands (35–56%), croplands/natural vegetation mosaic (34–55%), and grasslands (35–55%) for 2010.
Figure A4. Spatial variability of yearly mean temperature: (a–c) croplands (−4 to 12 °C), croplands/natural vegetation mosaic (−4 to 13 °C), and grasslands (−9 to 14 °C) for 2000; (d–f) croplands (−4 to 13 °C), croplands/natural vegetation mosaic (−4 to 13 °C), and grasslands (−9 to 15 °C) for 2007; (g–i) croplands (−6 to 12 °C), croplands/natural vegetation mosaic (−5 to 12 °C), and grasslands (−10 to 13 °C) for 2010.
Figure A5. Correlation matrix using the original values of N deposition for the grasslands class and the year 2000.
Figure A6. Scatterplot of the standardized residuals versus the standardized predicted values using the original values of N deposition for the grasslands class and the year 2000.
Figure A7. Correlation matrix using the natural logarithm of N deposition for the grasslands class and the year 2000.
Figure A8. Scatterplot of the standardized residuals versus the standardized predicted values using the natural logarithm of N deposition for the grasslands class and the year 2000.
26 pages, 7429 KiB  
Article
Hybrid Grasshopper Optimization Algorithm and Differential Evolution for Multilevel Satellite Image Segmentation
by Heming Jia, Chunbo Lang, Diego Oliva, Wenlong Song and Xiaoxu Peng
Remote Sens. 2019, 11(9), 1134; https://doi.org/10.3390/rs11091134 - 12 May 2019
Cited by 55 | Viewed by 7386
Abstract
An efficient satellite image segmentation method based on a hybrid grasshopper optimization algorithm (GOA) and minimum cross entropy (MCE) is proposed in this paper. The proposal, called GOA–jDE, merges GOA with self-adaptive differential evolution (jDE) to improve the search efficiency and preserve population diversity, especially in later iterations. A series of experiments was conducted on various satellite images to evaluate the performance of the algorithm. Both low and high segmentation levels are considered, increasing the dimensionality of the problem. The proposed approach is compared with standard color image thresholding methods as well as advanced satellite image thresholding techniques based on different criteria. The Friedman test and Wilcoxon's rank-sum test are performed to assess the significance of the differences between the algorithms. The superiority of the proposed method is illustrated from different aspects, such as average fitness function value, peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), feature similarity index (FSIM), standard deviation (STD), convergence performance, and computation time. Furthermore, natural images from the Berkeley segmentation dataset are used to validate the strong robustness of the proposed method. Full article
(This article belongs to the Special Issue Image Optimization in Remote Sensing)
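For a single threshold, the MCE criterion can be minimized by exhaustive search over the gray-level histogram. The sketch below assumes Li's cross-entropy formulation, with gray levels starting at 1 so that log(i) is defined; the GOA–jDE hybrid exists precisely because this exhaustive search becomes intractable at the multilevel settings (K = 4 to 12) studied in the paper.

```python
import math

def mce_objective(hist, t):
    """Li's cross-entropy criterion for a single threshold t:
    sum_i i*h[i]*log(i / mu_region(i)), with the two regions [1, t) and [t, L)."""
    def mu(lo, hi):
        mass = sum(i * hist[i] for i in range(lo, hi))
        cnt = sum(hist[i] for i in range(lo, hi))
        return mass / cnt if cnt else 1.0
    mu1, mu2 = mu(1, t), mu(t, len(hist))
    obj = 0.0
    for lo, hi, m in ((1, t, mu1), (t, len(hist), mu2)):
        for i in range(lo, hi):
            if hist[i]:
                obj += i * hist[i] * math.log(i / m)
    return obj

def mce_threshold(hist):
    """Exhaustive single-threshold MCE: argmin of the criterion over t."""
    return min(range(2, len(hist) - 1), key=lambda t: mce_objective(hist, t))
```

For a clearly bimodal histogram, the minimizing threshold falls between the two modes; with K thresholds the search space grows combinatorially, which is where the metaheuristic takes over.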
Graphical abstract

Figure 1. Variation of function s at different values of f and l: (a) l = 1.5 and f in [0, 1]; (b) f = 0.5 and l in [1, 2].
Figure 2. (a) Average value of the crossover rate (CR) and (b) scaling factor (F) over 30 runs of the jDE algorithm.
Figure 3. Framework of the GOA–jDE-based method.
Figure 4. Original test images, named ‘Image1’ to ‘Image8’, and the corresponding histograms for each color channel (red, green, and blue): (a) 1800 × 1200; (b) 2796 × 1864; (c) 5339 × 3559; (d) 2712 × 1808; (e) 4310 × 4019; (f) 2856 × 1904; (g) 3467 × 2311; (h) 1512 × 1008. Left: original test images; right: histogram of each frame.
Figure 5. Segmented results and local zoom maps of “Image4” at 4, 8, and 12 threshold levels: (a) K = 4; (b) K = 8; (c) K = 12.
Figure 6. Segmented results and local zoom maps of “Image7” at 4, 8, and 12 threshold levels: (a) K = 4; (b) K = 8; (c) K = 12.
Figure 7. Convergence curves of the fitness function using the MCE method at 12 threshold levels: (a) Image2; (b) Image4; (c) Image6; (d) Image8.
Figure 8. Comparison of PSNR values for different algorithms using the MCE method at 4, 6, 8, 10, and 12 levels.
Figure 9. Comparison of SSIM values over all images using the MCE method at 4, 6, 8, 10, and 12 levels.
Figure 10. Comparison of FSIM values over all images using the MCE method at 4, 6, 8, 10, and 12 levels.
Figure 11. Boxplots of the fitness function using the MCE method at 12 threshold levels: (a) Image5; (b) Image6; (c) Image7; (d) Image8.
Figure 12. Two natural images from the Berkeley segmentation dataset, named ‘Elephant’ and ‘Plane’, and the corresponding histograms for each color channel (red, green, and blue): (a) Elephant (481 × 321); (b) Plane (481 × 321).
16 pages, 2765 KiB  
Article
Development and Validation of a Photo-Based Measurement System to Calculate the Debarking Percentages of Processed Logs
by Joachim B. Heppelmann, Eric R. Labelle, Thomas Seifert, Stefan Seifert and Stefan Wittkopf
Remote Sens. 2019, 11(9), 1133; https://doi.org/10.3390/rs11091133 - 12 May 2019
Cited by 2 | Viewed by 3737
Abstract
Within a research project investigating the applicability and performance of modified harvesting heads used for debarking coniferous tree species, the actual debarking percentage of processed logs needed to be evaluated. Therefore, a computer-based photo-optical measurement system (Stemsurf), designed to assess the debarking percentage from images recorded in the field, was developed, tested under laboratory conditions, and applied in live field operations. In total, 1720 processed logs of coniferous species from modified harvesting heads were recorded and analyzed within Stemsurf. With a single log image as input, the overall debarking percentage was calculated by estimating the un-displayed part of the log surface and defining polygons representing the differently debarked areas of the log surface. To assess the precision and bias of the system, 480 images of an artificial log with defined surface polygons were captured under laboratory conditions. In the laboratory test, the standard deviation of the average debarking percentages remained within a 4% variation. A positive bias of 6.7% was caused by distortion and perspective effects, which resulted in an average underestimation of 1.1% for the summer debarking percentages gathered from field operations. The software generally performed as anticipated through field and laboratory testing and offers a suitable means of assessing the stem debarking percentage, a task that should grow in importance as more operations target debarked products. Full article
(This article belongs to the Special Issue Advances in Active Remote Sensing of Forests)
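The core of the measurement is the share of the digitized log surface covered by bark versus exposed wood. This is a simplified planar sketch using shoelace polygon areas with hypothetical coordinates; the published system additionally corrects for log curvature and the un-displayed back side of the stem (cf. Figure 2 of the article).

```python
def shoelace_area(pts):
    """Area of a simple closed polygon given as (x, y) vertices."""
    s = 0.0
    n = len(pts)
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

def debarking_percentage(log_outline, bark_polygons):
    """Percent of the visible log area that is debarked (exposed wood),
    i.e. not covered by any bark polygon. Assumes the bark polygons lie
    inside the outline and do not overlap (illustrative simplification)."""
    total = shoelace_area(log_outline)
    bark = sum(shoelace_area(p) for p in bark_polygons)
    return 100.0 * (total - bark) / total
```

A 100 × 10 outline with one 25 × 10 bark polygon reproduces the 75% wood / 25% bark split used as ground truth in the article's laboratory test series.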
Figure 1. Schematic of the operating principle and working steps of the Stemsurf software.
Figure 2. Cross-sectional view of the stem for the calculation of the bark surface area units, with V the camera view direction, r the radius at the y-stem-axis cross section, c the length of the stem surface, α the angle in radians, and y the perpendicular distance from the stem axis in the image.
Figure 3. Schematic of the laboratory setup for a test series performed on square geometry; the standardized log is turned 90 degrees to show an example of the square geometry.
Figure 4. Modifications performed on three different harvesting head prototypes: (A) general overview of the modifiable parts of conventional harvesting heads; (B) tested S1 modifications (inner and outer feed rollers, measuring wheel); (C) tested S2 modifications (feed rollers); (D) tested S3 modifications (inner and outer feed rollers, measuring wheel, upper delimbing knives, top knife).
Figure 5. Schematic of the parallel positioning of processed logs placed on a forest road, ready for image acquisition.
Figure 6. Histograms of the measured polygon shares: (A) wood surface, rectangular-geometry test series; (B) bark surface, rectangular-geometry test series; (C) wood surface, round-geometry test series; (D) bark surface, round-geometry test series. The red line marks the actual shares of 75% wood and 25% bark surface.
Figure 7. Calculated standard deviations for different sample sizes: (A) wood surface, rectangular-geometry test series; (B) wood surface, round-geometry test series; (C) wood surface, total; (D) bark surface, rectangular-geometry test series; (E) bark surface, round-geometry test series; (F) bark surface, total.
Figure 8. Schematic of a 2D projection of a log via equal pixels and the expected described log surface, with (A) longer areas towards the outside and (B) wider areas towards the extremities of the pictured log. The arrows indicate the direction of the exceeding surface areas defined by a single pixel.
18 pages, 5704 KiB  
Article
A Partition Modeling for Anthropogenic Heat Flux Mapping in China
by Shasha Wang, Deyong Hu, Shanshan Chen and Chen Yu
Remote Sens. 2019, 11(9), 1132; https://doi.org/10.3390/rs11091132 - 12 May 2019
Cited by 30 | Viewed by 4560
Abstract
Anthropogenic heat (AH) generated by human activities has a major impact on urban and regional climate. Accurately estimating anthropogenic heat is of great significance for studies of the urban thermal environment and climate change. In this study, a gridded anthropogenic heat flux (AHF) estimation scheme was constructed from socio-economic data, energy-consumption data, and multi-source remote sensing data using a partition modeling method, which accounts for the regional characteristics of AH emission caused by differences in regional development levels. Refined AHF mapping of China was realized at a high resolution of 500 m. The results show that the spatial distribution of AHF in China has pronounced regional characteristics. Among the provinces, Shanghai has the highest AHF, reaching 12.56 W·m−2, followed by Tianjin (5.92 W·m−2), Beijing (3.35 W·m−2), and Jiangsu (3.10 W·m−2). The refined AHF maps show that the high-value AHF aggregation areas are mainly distributed in north, east, and south China. High AHF values in urban areas are concentrated in the range 50–200 W·m−2, and the maximum AHF in the Shenzhen urban center reaches 267 W·m−2. Compared with other high-resolution AHF products, the AHF results of this study show higher spatial heterogeneity and better characterize regional AHF emission patterns. The spatial pattern of the AHF estimates corresponds to the distribution of building density, population, and industrial zones, with high-value areas mainly located at airports, railway stations, industrial areas, and commercial centers. The AHF estimation models constructed by the partition modeling method can thus effectively estimate AHF at large scales, and the results express the detailed spatial distribution of AHF in local areas.
These results can provide technical ideas and data support for studies on surface energy balance and urban climate change. Full article
(This article belongs to the Section Urban Remote Sensing)
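The partition modeling idea is to fit a separate linear regression between AHF and a nighttime-light index (VANUI) in each sub-region, then apply each region's own fit to its 500 m pixels. The sketch below uses synthetic numbers; the region keys and coefficients are illustrative, not the paper's fitted values.

```python
def linfit(x, y):
    """Ordinary least-squares slope and intercept for y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    return a, my - a * mx

def gridded_ahf(vanui_by_region, fits):
    """Apply each sub-region's fitted line to its own VANUI pixels
    (partition modeling); AHF is clamped at zero, an assumed convention."""
    out = {}
    for reg, vals in vanui_by_region.items():
        a, b = fits[reg]
        out[reg] = [max(0.0, a * v + b) for v in vals]
    return out
```

Fitting one line per region (ECR, NCR, ...) rather than a single national line is what lets the scheme reflect different development levels across regions.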
Figure 1. Study area division: northern coastal region (NCR), eastern coastal region (ECR), southern coastal region (SCR), middle Yangtze River region (MYAR), southwest region (SWR), northeast region (NER), middle Yellow River region (MYER), and northwest region (NWR).
Figure 2. Flowchart of the gridded anthropogenic heat flux (AHF) estimation scheme.
Figure 3. Anthropogenic heat emission (AHE) and AHF results in 2016: (a) AHE of different heat sources in the provinces; (b) AHF of the provinces.
Figure 4. Linear regression relationships between AHF and the vegetation-adjusted nighttime light urban index (VANUI or NTLnor) in the sub-regions: (a) ECR; (b) MYER; (c) MYAR; (d) SCR; (e) SWR; (f) NER; (g) NCR; (h) NWR.
Figure 5. AHF estimation results for China in 2016: (a) Beijing–Tianjin–Hebei region; (b) Yangtze River delta region; (c) Pearl River delta region.
Figure 6. AHF estimation results and validation of the high-value area distributions in three cities: (a) Beijing; (b) Shanghai; (c) Guangzhou; (d–f) high-resolution images of the corresponding cities. Note: (a,d) 1. Beijing Capital international airport; 2. Beijing railway station; 3. Beijing south railway station; 4. Beijing Nanyuan airport; 5. Beijing west railway station; 6. Beijing east railway station. (b,e) 1. Shanghai Pudong international airport; 2. Shanghai Hongqiao airport; 3. Shanghai south railway station; 4. Shanghai Gaodong helipad; 5. Oriental Pearl Tower; 6. Songjiang south station; 7. Shanghai Disney. (c,f) 1. Guangzhou Baiyun airport; 2. Guangzhou east railway station; 3. Guangzhou station; 4. Tangxi railway station; 5. Canton Tower; 6. industrial zone.
Figure 7. Comparison of the AHF results of the three models: (a) results of this study resampled to 1 km resolution; (b) results of the large scale urban consumption of energy (LUCY) model; (c) results of this study resampled to 2.5 arc-min resolution; (d) results of Flanner et al.
Figure 8. AHF estimation results of the three indexes: (a1–a3) VANUI; (b1–b3) normalized nighttime light data (NTLnor); (c1–c3) HSI.
18 pages, 5396 KiB  
Article
Aquarius Sea Surface Salinity Gridding Method Based on Dual Quality–Distance Weighting
by Yanyan Li, Qing Dong and Yongzheng Ren
Remote Sens. 2019, 11(9), 1131; https://doi.org/10.3390/rs11091131 - 11 May 2019
Cited by 4 | Viewed by 3397
Abstract
A new method for improving the accuracy of gridded sea surface salinity (SSS) fields is proposed in this paper. The method centers on dual quality–distance weighting of the Aquarius level 2 along-track SSS data according to quality flags, which indicate nonnominal data conditions for the measurements. In the weighting process, 14 data conditions were considered, and their geospatial distributions and influences on the SSS were visualized and evaluated. Three interpolation methods were employed, and weekly gridded SSS maps were produced for the period from September 2011 to May 2015. These maps were evaluated through comparisons with concurrent Argo buoy measurements. The results show that the proposed method improved the accuracy of the SSS fields by approximately 36% compared with the officially released weekly level 3 products, yielding root mean squared difference (RMSD), correlation, and bias values of 0.19 psu, 0.98, and 0.01 psu, respectively. These findings indicate a significant improvement in the accuracy of the SSS fields and provide a better understanding of the influences of different conditions on salinity. Full article
(This article belongs to the Special Issue Satellite Monitoring of Water Quality and Water Environment)
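A dual quality–distance weight multiplies a distance kernel by a quality factor that down-weights samples flagged for nonnominal conditions. The sketch below is an assumed scheme (Gaussian distance kernel, multiplicative per-flag penalties) for the weighted-average (WAF-style) variant; the paper's actual weights are derived from the 14 flagged conditions it analyzes.

```python
import math

def dual_weight(distance_km, flags, penalties, half_width_km=50.0):
    """Weight = quality weight * distance weight.

    Distance weight: Gaussian kernel in the sample-to-cell distance.
    Quality weight: product of per-flag penalties in (0, 1], one factor per
    raised condition flag (assumed scheme, not the paper's exact values)."""
    wd = math.exp(-((distance_km / half_width_km) ** 2))
    wq = 1.0
    for f in flags:
        wq *= penalties.get(f, 1.0)
    return wq * wd

def weighted_average_sss(samples, penalties):
    """Weighted-average SSS for one grid cell.

    samples: iterable of (sss_psu, distance_km, flags) tuples."""
    num = den = 0.0
    for sss, d, flags in samples:
        w = dual_weight(d, flags, penalties)
        num += w * sss
        den += w
    return num / den
```

A sample raised under, say, a rain flag contributes with reduced weight rather than being discarded outright, which is what lets the gridding keep coverage while suppressing degraded measurements.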
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>Mean spatial bias correction fields (psu) for Aquarius (left) ascending and (right) descending data: (top) beam 1, (middle) beam 2, and (bottom) beam 3.</p>
Full article ">Figure 2
<p>Relationships between the RMSD and (<b>a</b>) <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="normal">k</mi> <mn>1</mn> </msub> </mrow> </semantics></math>, (<b>b</b>) <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="normal">k</mi> <mn>2</mn> </msub> </mrow> </semantics></math>, and (<b>c</b>) <math display="inline"><semantics> <mrow> <msub> <mi mathvariant="normal">k</mi> <mn>3</mn> </msub> </mrow> </semantics></math>.</p>
Full article ">Figure 3
<p>Weekly sea surface salinity (SSS) fields from Aquarius for 2 July–8 July 2013 constructed using different algorithms: (<b>a</b>) weighted average fitting (WAF), (<b>b</b>) weighted unary linear fitting (WULF), (<b>c</b>) weighted binary linear fitting (WBLF), and (<b>d</b>) the Aquarius SSS L3 product data provided by Aquarius Data Processing System (ADPS).</p>
Full article ">Figure 4
<p>Time series of the weekly (<b>a</b>) root mean squared differences (RMSDs), (<b>b</b>) biases and (<b>c</b>) correlations between the Argo buoy data and the four Aquarius SSS analyses: WAF (magenta), weighted unary linear fitting (WULF) (green), WBLF (blue) and the official L3 SSS products provided by ADPS (red). The error statistics were calculated by comparing the Argo buoy measurements for a given week with the SSS values at the same locations obtained through interpolation of the corresponding SSS maps.</p>
Figure 5
<p>Statistics of the differences between the Argo buoy data and the results of the four Aquarius SSS analyses: (<b>a</b>) WAF, (<b>b</b>) WULF, (<b>c</b>) WBLF, and (<b>d</b>) the official standard L3 SSS products provided by ADPS. The error statistics were calculated by comparing the Argo buoy measurements for all weeks between September 2011 and May 2015 with the SSS values at the same locations obtained through interpolation of the corresponding Aquarius SSS maps.</p>
Figure 6
<p>Scatter plots of the results of the Aquarius weekly SSS analyses and the collocated Argo buoy data. The Aquarius SSS analyses are (<b>a</b>) WAF, (<b>b</b>) WULF, (<b>c</b>) WBLF, and (<b>d</b>) the official standard L3 SSS product provided by ADPS. The colors represent the number of points in each 0.1 psu bin. The error statistics were calculated by comparing the Argo buoy measurements with the SSS values at the same locations obtained via interpolation of the corresponding Aquarius SSS maps for all weeks between September 2011 and May 2015.</p>
Figure 7
<p>Distribution of each condition. The colors indicate the normalized frequency of each condition in each 2° × 2° bin. The conditions represented in panels (<b>a</b>) to (<b>n</b>) directly correspond to those listed in <a href="#remotesensing-11-01131-t002" class="html-table">Table 2</a>.</p>
Figure 8
<p>SSS difference map between the WAF and ADPS products (WAF - ADPS): (<b>a</b>) is the total difference, (<b>b</b>) is the part of the difference due to the large-scale bias adjustment, and (<b>c</b>) is the part of the difference due to the weighting process.</p>
16 pages, 4058 KiB  
Article
Satellite-based Cloudiness and Solar Energy Potential in Texas and Surrounding Regions
by Shuang Xia, Alberto M. Mestas-Nuñez, Hongjie Xie and Rolando Vega
Remote Sens. 2019, 11(9), 1130; https://doi.org/10.3390/rs11091130 - 11 May 2019
Cited by 2 | Viewed by 3413
Abstract
Global horizontal irradiance (i.e., shortwave downward solar radiation received by a horizontal surface on the ground) is an important geophysical variable for climate and energy research. Since solar radiation is attenuated by clouds, its variability is intimately associated with the variability of cloud properties. The spatial distribution of clouds and the daily, monthly, seasonal, and annual solar energy potential (i.e., the solar energy available to be converted into electricity) derived from satellite estimates of global horizontal irradiance are explored over the state of Texas, USA, and surrounding regions, including northern Mexico and the western Gulf of Mexico. The maximum (minimum) monthly solar energy potential in the study area is 151–247 kWh m−2 (43–145 kWh m−2) in July (December). The maximum (minimum) seasonal solar energy potential is 457–706 kWh m−2 (167–481 kWh m−2) in summer (winter). The available annual solar energy in 2015 was 1295–2324 kWh m−2. The solar energy potential is significantly higher over the Gulf of Mexico than over land despite the ocean waters having typically more cloudy skies. Cirrus is the dominant cloud type over the Gulf, and it attenuates less solar irradiance than other cloud types. As expected from our previous work, there is good agreement between satellite and ground estimates of solar energy potential in San Antonio, Texas, and we assume this agreement applies to the surrounding larger region discussed in this paper. The study underscores the relevance of geostationary satellites for cloud/solar energy mapping and provides useful estimates on solar energy in Texas and surrounding regions that could potentially be harnessed and incorporated into the electrical grid. Full article
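The solar energy potential figures quoted above are time integrals of global horizontal irradiance. A sketch of the accumulation, assuming regularly sampled GHI in W/m²; the function name and sampling interval are illustrative:

```python
def daily_energy_kwh_m2(ghi_w_m2, dt_hours=1.0):
    """Integrate GHI samples (W/m^2) over a day into solar energy
    potential (kWh/m^2); monthly and seasonal totals are sums of days."""
    return sum(ghi_w_m2) * dt_hours / 1000.0

# e.g., a day averaging 500 W/m^2 over 24 hourly samples accumulates
# 12 kWh/m^2; about 30 such days would approach the July monthly range.
```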
(This article belongs to the Special Issue Feature Papers for Section Atmosphere Remote Sensing)
Show Figures

Graphical abstract
Figure 1
<p>The left panel shows the state boundaries of the contiguous United States to the north and of Mexico to the south with the location of the study area (i.e., Texas and surrounding regions) superimposed as a blue box. The city of San Antonio, Texas is indicated as a small open square near the center of the study region. The right panel is an expanded view of San Antonio showing the locations of the two ground stations (black dots) and their overlapping satellite cells (black boxes) relative to the city’s main highways (blue lines).</p>
Figure 2
<p>Percentage plots of cloud-type frequency (color bars) and reduced solar irradiance (color error bars of mean ± 1 standard deviation) versus the seven cloud types of <a href="#remotesensing-11-01130-t001" class="html-table">Table 1</a> using GSIP-v2 (left two columns) and GSIP-v3 (right column) datasets for each year in 2009–2016 and for the two station locations (University of Texas at San Antonio (UTSA): Light blue color bars and blue error bars; Alamo Solar Farm (ASF): Pink color bars and red error bars). The year and the number of samples used for each plot (N) are given in the upper right corner of each panel. Note that there are two panels for 2014 because in that year the GSIP-v2 and GSIP-v3 datasets overlap.</p>
Figure 3
<p>Similar to <a href="#remotesensing-11-01130-f002" class="html-fig">Figure 2</a>, but combining all available data for GSIP-v2 (left panel) and GSIP-v3 (right panel).</p>
Figure 4
<p>Similar to <a href="#remotesensing-11-01130-f002" class="html-fig">Figure 2</a>, but using only the GSIP-v3 dataset at the two station locations for four 2.25 h time periods (07:45–10:00, 10:45–13:00, 13:45–16:00, and 16:45–19:00) (left panel) and for the four seasons as defined in the text (right panel).</p>
Figure 5
<p>Same as the left panel in <a href="#remotesensing-11-01130-f004" class="html-fig">Figure 4</a>, but for each of the four seasons as defined in the text.</p>
Figure 6
<p>The seasonal cloud-type frequency derived from GSIP-v3 data of 2014-2016.</p>
Figure 7
<p>Same as <a href="#remotesensing-11-01130-f006" class="html-fig">Figure 6</a>, but for cloud layers.</p>
Figure 8
<p>The monthly solar energy potential derived from GSIP-v3 over the study area.</p>
Figure 9
<p>The seasonal (2014-2016) and annual (2015) solar energy potential derived from GSIP-v3.</p>
Figure 10
<p>The daily, monthly and seasonal cumulative solar energy derived from GSIP-v3 for the two cells where the two stations (UTSA and ASF) are located: (<b>a</b>) The daily solar energy derived from satellite GSIP-v3 for the two stations; (<b>b</b>) the daily solar energy of station measurements overlaid on that derived from satellites; (<b>c</b>) the average daily solar energy derived from satellite GSIP-v3 for 365 days; (<b>d</b>) the min, mean, and max daily solar energy derived from satellite GSIP-v3 in each month; (<b>e</b>) monthly solar energy derived from satellite GSIP-v3 and the ground; and (<b>f</b>) the seasonal solar energy derived from satellite GSIP-v3 and the ground (note that the UTSA site does not have enough ground data to compute seasonal values).</p>
22 pages, 6381 KiB  
Article
Field Intercomparison of Radiometers Used for Satellite Validation in the 400–900 nm Range
by Viktor Vabson, Joel Kuusk, Ilmar Ansko, Riho Vendt, Krista Alikas, Kevin Ruddick, Ave Ansper, Mariano Bresciani, Henning Burmester, Maycira Costa, Davide D’Alimonte, Giorgio Dall’Olmo, Bahaiddin Damiri, Tilman Dinter, Claudia Giardino, Kersti Kangro, Martin Ligi, Birgot Paavel, Gavin Tilstone, Ronnie Van Dommelen, Sonja Wiegmann, Astrid Bracher, Craig Donlon and Tânia Casal
Remote Sens. 2019, 11(9), 1129; https://doi.org/10.3390/rs11091129 - 11 May 2019
Cited by 25 | Viewed by 6503
Abstract
An intercomparison of radiance and irradiance ocean color radiometers (the second laboratory comparison exercise—LCE-2) was organized within the frame of the European Space Agency funded project Fiducial Reference Measurements for Satellite Ocean Color (FRM4SOC) May 8–13, 2017 at Tartu Observatory, Estonia. LCE-2 consisted of three sub-tasks: (1) SI-traceable radiometric calibration of all the participating radiance and irradiance radiometers at the Tartu Observatory just before the comparisons; (2) indoor, laboratory intercomparison using stable radiance and irradiance sources in a controlled environment; (3) outdoor, field intercomparison of natural radiation sources over a natural water surface. The aim of the experiment was to provide a link in the chain of traceability from field measurements of water reflectance to the uniform SI-traceable calibration, and after calibration to verify whether different instruments measuring the same object provide results consistent within the expected uncertainty limits. This paper describes the third phase of LCE-2: The results of the field experiment. The calibration of radiometers and laboratory comparison experiment are presented in a related paper of the same journal issue. Compared to the laboratory comparison, the field intercomparison has demonstrated substantially larger variability between freshly calibrated sensors, because the targets and environmental conditions during radiometric calibration were different, both spectrally and spatially. Major differences were found for radiance sensors measuring a sunlit water target at a viewing zenith angle of 139° because of the different fields of view. Major differences were found for irradiance sensors because of imperfect cosine response of diffusers. Variability between individual radiometers also depended significantly on the sensor type and on the specific measurement target.
Uniform SI-traceable radiometric calibration, which ensures fairly good consistency for indoor laboratory measurements, is insufficient for outdoor field measurements, mainly due to the different angular variability of illumination. More stringent specifications and individual testing of radiometers for all relevant systematic effects (temperature, nonlinearity, spectral stray light, etc.) are needed to reduce biases between instruments and better quantify measurement uncertainties. Full article
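One way to quantify the reported inter-sensor variability is to compare each instrument against a band-wise consensus spectrum. The paper's consensus definition is not reproduced here; the median across instruments below is one robust, illustrative choice:

```python
import numpy as np

def consensus_and_deviations(spectra):
    """Band-wise consensus across instruments and each instrument's
    percent deviation from it. Input shape: (n_sensors, n_bands).
    The median is an assumption; the study's consensus may differ."""
    spectra = np.asarray(spectra, dtype=float)
    consensus = np.median(spectra, axis=0)
    deviations = 100.0 * (spectra - consensus) / consensus
    return consensus, deviations
```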
(This article belongs to the Special Issue Fiducial Reference Measurements for Satellite Ocean Colour)
Show Figures

Graphical abstract
Figure 1
<p>Main differences between the field and laboratory measurements of the second laboratory comparison exercise (LCE-2) causing a substantial increase in uncertainty of the field measurements.</p>
Figure 2
<p>Pier and diving platform at the southern coast of Lake Kääriku.</p>
Figure 3
<p>3D CAD (computer-aided design) drawings of the frames for mounting irradiance (left) and radiance (right) sensors during the outdoor experiment.</p>
Figure 4
<p>All the radiance and irradiance radiometers were mounted in common frames during the LCE-2 outdoor experiment. Left frame—irradiance sensors; right frame—radiance sensors.</p>
Figure 5
<p>All-sky camera images captured in the middle of the casts used in the intercomparison analysis. Irradiance—C10, C12, C13, C14; blue sky radiance—C8, C12, C13; water radiance—C17, C23. Red dots in C8, C12, C13 indicate approximate view direction of the radiance sensors.</p>
Figure 6
<p>Relative variation of 550 nm signal of one RAMSES sensor during irradiance (<b>left</b>) and radiance (<b>right</b>; C8, C12, C13 blue sky; C17 water in cloud shadow; C23 sunlit water) casts selected for intercomparison analysis.</p>
Figure 7
<p>Photographs of radiance targets used in the intercomparison analysis. The circles denote approximate FOV of WISP-3 (smallest), RAMSES, and HyperOCR (largest).</p>
Figure 8
<p>The angle between red lines marking the directions of HyperOCR and RAMSES sensors was measured to be 1.3° from this image.</p>
Figure 9
<p>Irradiance and radiance consensus values in the outdoor experiment. C8, C10, C12, C13, C14—blue sky (<b>radiance</b>) or direct sunshine (<b>irradiance</b>); C17—water in cloud shadow; C23—sunlit water.</p>
Figure 10
<p>Irradiance sensors compared to the consensus value. Solid lines—RAMSES sensors; dashed lines—HyperOCR sensors; double line—SR-3500.</p>
Figure 11
<p>Radiance sensors compared to the consensus value in the outdoor experiment. C8, C12, C13—blue sky; C17—water in cloud shadow at 139° VZA; C23—sunlit water at 130° VZA. Solid lines—RAMSES sensors; dashed lines—HyperOCR sensors; double lines—SeaPRISM (SP) and SR-3500; dotted lines—WISP-3.</p>
Figure 12
<p>Variability between irradiance and radiance sensors. E_cal and L_cal—due to calibration state; E(Lab), L(Low) and L(High)—variability in laboratory intercomparison; E(Sun), L(BlueSky) and L(Water) variability in the field.</p>
Figure 13
<p>Normalized cosine response error of five RAMSES sensors.</p>
Figure 14
<p>Effect of calibration with the sensor tilted to 20° with respect to the incident irradiance.</p>
Figure 15
<p>Integrated cosine error of the five RAMSES radiometers.</p>
Figure 16
<p>Relative variability due to wavelength error of ±0.3 nm of a radiance sensor.</p>
Figure 17
<p>Stray light effects in the outdoor experiment. One RAMSES 8329 irradiance sensor—dashed line; two RAMSES and two HyperOCR radiance sensors: Solid lines with markers—blue sky (C12), and solid lines without markers—sunlit water (C23).</p>
17 pages, 33904 KiB  
Article
DisCountNet: Discriminating and Counting Network for Real-Time Counting and Localization of Sparse Objects in High-Resolution UAV Imagery
by Maryam Rahnemoonfar, Dugan Dobbs, Masoud Yari and Michael J. Starek
Remote Sens. 2019, 11(9), 1128; https://doi.org/10.3390/rs11091128 - 11 May 2019
Cited by 32 | Viewed by 4591
Abstract
Recent deep-learning counting techniques revolve around two distinct features of data—sparse data, which favors detection networks, or dense data where density map networks are used. Both techniques fail to address a third scenario, where dense objects are sparsely located. Raw aerial images represent sparse distributions of data in most situations. To address this issue, we propose a novel and exceedingly portable end-to-end model, DisCountNet, and an example dataset to test it on. DisCountNet is a two-stage network that uses theories from both detection and heat-map networks to provide a simple yet powerful design. The first stage, DiscNet, operates on the theory of coarse detection, but does so by converting a rich and high-resolution image into a sparse representation where only important information is encoded. Following this, CountNet operates on the dense regions of the sparse matrix to generate a density map, which provides fine locations and count predictions on densities of objects. Comparing the proposed network to current state-of-the-art networks, we find that we can maintain competitive performance while using a fraction of the computational complexity, resulting in a real-time solution. Full article
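The heat-map stage rests on a simple property: if each labeled object center is spread by a unit-mass Gaussian, the density map sums to the object count. A minimal sketch; the grid size, sigma, and kernel radius are illustrative, and centers are assumed to lie away from the image border:

```python
import numpy as np

def density_map(shape, centers, sigma=2.0, radius=8):
    """Spread each hand-labeled object center into a unit-mass Gaussian,
    so the resulting heat map sums to the object count. Centers are
    assumed to lie at least `radius` pixels inside the image."""
    dm = np.zeros(shape, dtype=float)
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    kernel = np.exp(-(x ** 2 + y ** 2) / (2.0 * sigma ** 2))
    kernel /= kernel.sum()                       # unit mass per object
    for r, c in centers:
        dm[r - radius:r + radius + 1, c - radius:c + radius + 1] += kernel
    return dm

dm = density_map((64, 64), [(20, 20), (40, 45)])
print(round(dm.sum(), 6))  # prints 2.0
```

A counting network trained to regress such maps yields the predicted count by summing its output over the foreground regions.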
(This article belongs to the Section Remote Sensing Image Processing)
Show Figures

Graphical abstract
Figure 1
<p>The difference between the front view of an object in a typical ground-based, human-centric photograph (<b>Top Left</b>) and the top view in aerial images (<b>Right; Bottom Left</b>). Objects in aerial images are small, flat, and sparse; moreover, objects and backgrounds are highly imbalanced. In human-centric photographs, different parts of objects (head, tail, body, legs) are clearly observable, while aerial imagery presents very coarse features. In addition, aerial imagery presents ambiguous features, such as shadows.</p>
Figure 2
<p>Our DisCount network: An end-to-end learning framework for counting sparse objects in high-resolution images. It includes two networks: The first network (DiscNet) will select regions and the second network (CountNet) will count the objects inside the selected regions. Convolutions are shown as transparent orange, pooling layers are represented in transparent red, and transposed convolutions are shown in transparent blue.</p>
Figure 3
<p>UAV platform used in this research.</p>
Figure 4
<p>Orthomosaic image of 600+ acre grazing paddock at Welder Wildlife Foundation taken in December 2015 with the eBee fixed-wing platform and RGB camera.</p>
Figure 5
<p>An example of sparsity in our dataset. Translucent area is the background. Transparent patches are labeled as foreground information, and represent only a small percentage of the total area.</p>
Figure 6
<p>A visualization of density heat-map generation. Starting with the original image, shown left, a point value is hand labeled at the approximate center of the cow. This value, shown middle, is processed using a Gaussian smoothing kernel, the result is shown right. The sum of all pixel values of the right matrix is 1.</p>
Figure 7
<p>The flow of information through DisCountNet. Full feature images are given to DiscNet, which generates a sparse representation of the image to give to CountNet. CountNet then operates on this sparse representation to generate a per-pixel probability that a cow is in a given pixel. These values, when summed up, equal the predicted number of cows in the image.</p>
Figure 8
<p>From top to bottom, a source image, its label, and our prediction heat map.</p>
Figure 9
<p>From left to right: Source Image regions, heat-map label, and heat-map prediction.</p>
22 pages, 3548 KiB  
Article
Analysis of L-Band SAR Data for Soil Moisture Estimations over Agricultural Areas in the Tropics
by Mehrez Zribi, Sekhar Muddu, Safa Bousbih, Ahmad Al Bitar, Sat Kumar Tomer, Nicolas Baghdadi and Soumya Bandyopadhyay
Remote Sens. 2019, 11(9), 1122; https://doi.org/10.3390/rs11091122 - 11 May 2019
Cited by 51 | Viewed by 8274
Abstract
The main objective of this study is to analyze the potential use of L-band radar data for the estimation of soil moisture over tropical agricultural areas under dense vegetation cover conditions. Ten radar images were acquired using the Phased Array Synthetic Aperture Radar/Advanced Land Observing Satellite (PALSAR/ALOS)-2 sensor over the Berambadi watershed (south India), between June and October of 2018. Simultaneous ground measurements of soil moisture, soil roughness, and leaf area index (LAI) were also recorded. The sensitivity of PALSAR observations to variations in soil moisture has been reported by several authors, and is confirmed in the present study, even for the case of very dense crops. The radar signals are simulated using five different radar backscattering models (physical and semi-empirical), over bare soil, and over areas with various types of crop cover (turmeric, marigold, and sorghum). When the semi-empirical water cloud model (WCM) is parameterized as a function of the LAI, to account for the vegetation’s contribution to the backscattered signal, it can provide relatively accurate estimations of soil moisture in turmeric and marigold fields, but has certain limitations when applied to sorghum fields. Observed limitations highlight the need to expand the analysis beyond the LAI by including additional vegetation parameters in order to take into account volume scattering in the L-band backscattered radar signal for accurate soil moisture estimation. Full article
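The water cloud model (WCM) referred to above has a standard semi-empirical form in which the canopy attenuates and scatters the soil signal. A sketch with LAI as the vegetation descriptor; the linear soil term and all coefficient values below are placeholders, not the study's calibrated parameters:

```python
import numpy as np

def wcm_backscatter(mv, lai, theta_deg, A, B, C, D):
    """Semi-empirical water cloud model with LAI as the vegetation
    descriptor. mv is volumetric soil moisture; the bare-soil term
    (C + D * mv, in dB) and coefficients A, B, C, D are illustrative
    placeholders. Returns total backscatter in dB."""
    theta = np.radians(theta_deg)
    tau2 = np.exp(-2.0 * B * lai / np.cos(theta))        # two-way canopy attenuation
    sigma_veg = A * lai * np.cos(theta) * (1.0 - tau2)   # canopy volume scattering (linear)
    sigma_soil = 10.0 ** ((C + D * mv) / 10.0)           # soil term, dB -> linear
    return 10.0 * np.log10(sigma_veg + tau2 * sigma_soil)
```

With LAI = 0 the canopy terms vanish and the model reduces to the bare-soil response, which is the behavior exploited when inverting it for soil moisture.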
(This article belongs to the Special Issue Microwave Remote Sensing for Hydrology)
Show Figures

Figure 1
<p>View of the Berambadi watershed, showing the location of studied plots of marigold, sorghum, and turmeric.</p>
Figure 2
<p>Illustration of three agricultural fields: (<b>a</b>) turmeric, (<b>b</b>) marigold, (<b>c</b>) sorghum.</p>
Figure 3
<p>In situ soil moisture measurements in reference fields during the ground campaigns.</p>
Figure 4
<p>Ground measurements of leaf area index (LAI) in reference fields during the experimental campaigns.</p>
Figure 5
<p>Phased Array Synthetic Aperture Radar (PALSAR) radar data as a function of soil moisture for bare soils: (<b>a</b>) horizontal-horizontal (HH) polarization, (<b>b</b>) horizontal-vertical (HV) polarization.</p>
Figure 6
<p>PALSAR radar data as a function of soil moisture for turmeric cover: (<b>a</b>) HH polarization, LAI &lt;2.5; (<b>b</b>) HH polarization, LAI &gt;2.5; (<b>c</b>) HV polarization, LAI &lt;2.5; (<b>d</b>) HV polarization LAI &gt;2.5.</p>
Figure 7
<p>PALSAR radar data as a function of soil moisture for marigold cover: (<b>a</b>) HH polarization, LAI &lt;2.5; (<b>b</b>) HH polarization, LAI &gt;2.5; (<b>c</b>) HV polarization, LAI &lt;2.5; (<b>d</b>) HV polarization, LAI &gt;2.5.</p>
Figure 8
<p>PALSAR radar data as a function of soil moisture for sorghum cover: (<b>a</b>) HH polarization, LAI &lt;2.5; (<b>b</b>) HH polarization, LAI &gt;2.5; (<b>c</b>) HV polarization, LAI &lt;2.5; (<b>d</b>) HV polarization, LAI &gt;2.5.</p>
Figure 9
<p>Comparison between real radar data and simulations using different physical or semi-empirical models: (<b>a</b>) HH polarization; (<b>b</b>) HV polarization.</p>
Figure 10
<p>Comparison between calibrated model and radar data for: (<b>a</b>) turmeric, HH polarization, (<b>b</b>) turmeric, HV polarization, (<b>c</b>) marigold, HH polarization, (<b>d</b>) marigold, HV polarization, (<b>e</b>) sorghum, HH polarization, (<b>f</b>) sorghum, HV polarization.</p>
Figure 11
<p>Comparison between ground measurements and estimated soil moisture values using PALSAR radar data, (<b>a</b>) turmeric, HH polarization, (<b>b</b>) turmeric, HV polarization, (<b>c</b>) marigold, HH polarization, (<b>d</b>) marigold, HV polarization.</p>
13 pages, 5657 KiB  
Article
Establishment and Assessment of a New GNSS Precipitable Water Vapor Interpolation Scheme Based on the GPT2w Model
by Fei Yang, Jiming Guo, Xiaolin Meng, Junbo Shi and Lv Zhou
Remote Sens. 2019, 11(9), 1127; https://doi.org/10.3390/rs11091127 - 10 May 2019
Cited by 8 | Viewed by 3820
Abstract
With the development of Global Navigation Satellite System (GNSS) reference station networks that provide rich data sources containing atmospheric information, the precipitable water vapor (PWV) retrieved from GNSS remote sensing has become one of the most important bodies of data in many meteorological departments. GNSS stations are spatially scattered, and their separations generally range from a few kilometers to tens of kilometers. Therefore, the spatial resolution of GNSS-PWV can restrict some applications such as interferometric synthetic aperture radar (InSAR) atmospheric calibration and regional atmospheric water vapor analysis, which inevitably require the spatial interpolation of GNSS-PWV. This paper explored a PWV interpolation scheme based on the GPT2w model, which requires no meteorological data at an interpolation station and no regression analysis of the observation data. The PWV interpolation experiment was conducted in Hong Kong by different interpolation schemes, which differed in whether the impact of elevation was considered and whether the GPT2w model was added. In this paper, we adopted three skill scores, i.e., compound relative error (CRE), mean absolute error (MAE), and root mean square error (RMSE), and two approaches, i.e., station cross-validation and grid data validation, for our comparison. Numerical results showed that the interpolation schemes adding the GPT2w model could greatly improve the PWV interpolation accuracy when compared to the traditional schemes, especially at interpolation points away from the elevation range of reference stations. Moreover, this paper analyzed the PWV interpolation results under different weather conditions, at different locations, and on different days. Full article
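Of the interpolation schemes compared (IDW, Kriging, TPS, with and without GPT2w), inverse distance weighting is the simplest to illustrate. A 2-D horizontal sketch; the paper's GPT2w-based variants additionally account for elevation before interpolating, and the names and power parameter here are illustrative:

```python
import numpy as np

def idw_interpolate(xy_ref, pwv_ref, xy_new, power=2.0):
    """Inverse-distance-weighted PWV at new points from reference GNSS
    stations (horizontal only; elevation handling is omitted here)."""
    xy_ref = np.asarray(xy_ref, dtype=float)
    pwv_ref = np.asarray(pwv_ref, dtype=float)
    out = []
    for p in np.atleast_2d(np.asarray(xy_new, dtype=float)):
        d = np.linalg.norm(xy_ref - p, axis=1)
        if np.any(d == 0.0):                 # query point coincides with a station
            out.append(float(pwv_ref[d == 0.0][0]))
            continue
        w = 1.0 / d ** power                 # closer stations weigh more
        out.append(float(np.sum(w * pwv_ref) / np.sum(w)))
    return np.array(out)
```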
(This article belongs to the Special Issue GPS/GNSS for Earth Science and Applications)
Show Figures

Graphical abstract
Figure 1
<p>Flowchart of PWV interpolation with the GPT2w model.</p>
Figure 2
<p>Geographic distribution of GNSS stations and their elevations.</p>
Figure 3
<p>Scatter (<b>upper</b>) and boxplot (<b>lower</b>) of the observed PWV at each station during the period of rainless and rainy days. Blue for the rainy condition and red for the rainless condition.</p>
Figure 4
<p>RMSE of different interpolation schemes on each day for station cross-validation.</p>
Figure 5
<p>Map showing RMSE at the 12 stations for the 10 interpolation schemes in rainy weather condition. (<b>a</b>) IDW, (<b>b</b>) IDW-GPT2w, (<b>c</b>) Kriging, (<b>d</b>) Kriging-GPT2w, (<b>e</b>) 3DKriging, (<b>f</b>) 3DKriging-GPT2w, (<b>g</b>) TPS, (<b>h</b>) TPS-GPT2w, (<b>i</b>) 3DTPS, (<b>j</b>) 3DTPS-GPT2w.</p>
Figure 6
<p>Histogram of RMSE at 25 grid points for the 10 interpolation schemes.</p>
15 pages, 2684 KiB  
Article
A Rapid and Automated Urban Boundary Extraction Method Based on Nighttime Light Data in China
by Xiaojiang Liu, Xiaogang Ning, Hao Wang, Chenggang Wang, Hanchao Zhang and Jing Meng
Remote Sens. 2019, 11(9), 1126; https://doi.org/10.3390/rs11091126 - 10 May 2019
Cited by 26 | Viewed by 4089
Abstract
As urbanization has progressed over the past 40 years, continuous population growth and the rapid expansion of urban land use have caused some regions to experience various problems, such as insufficient resources and issues related to the environmental carrying capacity. The urbanization process can be understood using nighttime light data to quickly and accurately extract urban boundaries at large scales. A new method is proposed here to quickly and accurately extract urban boundaries using nighttime light imagery. Three types of nighttime light data from the DMSP/OLS (US military’s defense meteorological satellite), NPP-VIIRS (National Polar-orbiting Partnership-Visible Infrared Imaging Radiometer Suite), and Luojia1-01 data sets are selected, and the high-precision urban boundaries obtained from a high-resolution image are selected as the true value. Next, 15 cities are selected as the training samples, and the Jaccard coefficient is introduced. The spatial data comparison method is then used to determine the optimal threshold function for the urban boundary extraction. Alternative high-precision urban boundary truth-values for the 13 cities are then selected, and the accuracy of the urban boundary extraction results obtained using the optimal threshold function and the mutation detection method are evaluated. The following observations are made from the results: (i) The average relative errors for the urban boundary extraction results based on the three nighttime light data sources (DMSP/OLS, NPP-VIIRS, and Luojia1-01) using the optimal threshold functions are 29%, 20%, and 39%, respectively. 
Compared with the mutation detection method, these relative errors are reduced by 83%, 18%, and 77%, respectively; (ii) The average overall classification accuracies of the extracted urban boundaries are 95%, 96%, and 93%, respectively, which are 5%, 1%, and 7% higher than those for the mutation detection method; (iii) The average Kappa coefficients of the extracted urban boundaries are 61%, 71%, and 61%, respectively, which are 5%, 4%, and 12% higher than for the mutation detection method. Full article
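The Jaccard coefficient used to tune the extraction threshold measures overlap between a candidate urban mask and the reference boundary. A minimal sketch over boolean rasters; the function is illustrative, not the authors' implementation:

```python
import numpy as np

def jaccard(mask_a, mask_b):
    """Jaccard coefficient |A intersect B| / |A union B| between two
    boolean urban-extent rasters; higher means a better threshold."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0                     # two empty masks agree perfectly
    return float(np.logical_and(a, b).sum() / union)
```

Sweeping the nighttime-light threshold and keeping the value that maximizes this score against the high-resolution truth masks is what yields the optimal threshold function.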
(This article belongs to the Special Issue Advances in Remote Sensing with Nighttime Lights)
Show Figures

Graphical abstract
Figure 1
<p>Urban boundary extraction technology roadmaps using nighttime light data.</p>
Figure 2
<p>Schematic diagram showing the threshold changing process (NTL is the values in the nighttime light image).</p>
Figure 3
<p>DMSP/OLS data frequency distribution: The first panel shows the distribution of DMSP/OLS pixel value frequency in Tianjin, and the second shows the distribution in Nanjing.</p>
Figure 4
<p>Correlation analysis of the nighttime light data histogram features and sample area optimal thresholds (The first is DMSP/OLS threshold analysis. The second is NPP-VIIRS threshold analysis. The last is Luojia1-01 threshold analysis).</p>
Figure 5
<p>Luojia1-01 image frequency histogram.</p>
Figure 6
<p>Urban boundary extraction results based on the nighttime light images for DMSP/OLS, NPP-VIIRS, and Luojia1-01 (for each data source, the first row shows the result of the mutation detection method and the second row the result of the proposed method).</p>
20 pages, 9960 KiB  
Article
Surfaces of Revolution (SORs) Reconstruction Using a Self-Adaptive Generatrix Line Extraction Method from Point Clouds
by Xianglei Liu, Ming Huang, Shanlei Li and Chaoshuai Ma
Remote Sens. 2019, 11(9), 1125; https://doi.org/10.3390/rs11091125 - 10 May 2019
Cited by 2 | Viewed by 3532
Abstract
This paper presents an automatic reconstruction algorithm for surfaces of revolution (SORs) with a self-adaptive method for generatrix line extraction from point clouds. The proposed method does not need to calculate point-cloud normals, which greatly improves the efficiency and accuracy of SOR reconstruction. Firstly, the rotation axis of a SOR is automatically extracted as the one of the three axial directions with minimum relative deviation, for both tall-thin and short-wide SORs. Secondly, the projection profile of a SOR is extracted with a triangulated irregular network (TIN) model and the random sample consensus (RANSAC) algorithm. Thirdly, the point set of the generatrix line is determined by searching for the extremum of coordinate Z, together with overflow-point processing, and the type of generatrix line is then determined by comparing the RMS errors of linear and quadratic curve fitting. To validate the efficiency and accuracy of the proposed method, two kinds of SORs, simple SORs with a straight generatrix line and complex SORs with a curved generatrix line, are selected for comparative analysis. The results demonstrate that the proposed method is robust and reconstructs SORs from point clouds with higher accuracy and efficiency. Full article
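The circular-contour step above relies on RANSAC fitting. As a sketch of the general RANSAC idea applied to a circle (illustrative parameter names and thresholds; not the authors' implementation):

```python
import numpy as np

def fit_circle_3pts(p1, p2, p3):
    """Circle (cx, cy, r) through three points via perpendicular-bisector equations."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None  # points are collinear, no unique circle
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy, np.hypot(ax - ux, ay - uy)

def ransac_circle(points, n_iter=200, tol=0.05, seed=0):
    """Robust circle fit: sample 3 points, keep the model with the most inliers."""
    pts = np.asarray(points, dtype=float)
    rng = np.random.default_rng(seed)
    best, best_inliers = None, -1
    for _ in range(n_iter):
        idx = rng.choice(len(pts), 3, replace=False)
        model = fit_circle_3pts(*pts[idx])
        if model is None:
            continue
        cx, cy, r = model
        dist = np.abs(np.hypot(pts[:, 0] - cx, pts[:, 1] - cy) - r)
        inliers = int((dist < tol).sum())
        if inliers > best_inliers:
            best, best_inliers = model, inliers
    return best
```

The point of RANSAC here is exactly what the abstract exploits: a few noisy or overflow points do not pull the fitted contour away from the dominant circular shape.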
(This article belongs to the Special Issue Point Cloud Processing in Remote Sensing)
Show Figures

Figure 1: Flowchart of the automatic SORs reconstruction from point clouds.
Figure 2: Two kinds of SORs. (a) A tall-thin type SOR and (b) a short-wide type SOR.
Figure 3: Rotation axis extraction for a short-wide SOR of a straw hat. (a) The point cloud of a straw hat with three axial directions; (b), (c), and (d) the two parts of the point cloud divided by the plane Y = 0 according to axial directions 1, 2, and 3, respectively.
Figure 4: Circular contour fitting. (a) A point set of the top of a straw hat, (b) constructed TIN model, (c) fitted circular contour by the RANSAC algorithm, and (d) fitted circular contour only by the RANSAC algorithm.
Figure 5: Original point cloud (gray points) and the extracted projection profile of a straw hat (magenta points).
Figure 6: Schematic diagram of extracting the boundary X of the projection profile.
Figure 7: Overflow points processing. (a) The extracted point set of boundary X containing overflow points and (b) the processed point set of boundary X without overflow points.
Figure 8: 3D spatial data hyperfine modeling system.
Figure 9: Reconstructed SOR of a cylinder. (a) A photo of the cylinder, (b) original point cloud, (c) reconstructed SOR by the curvature computation method, (d) cross-section of (c), (e) reconstructed SOR by the proposed method, and (f) cross-section of (e).
Figure 10: Reconstructed SOR of a frustum of a cone. (a) A photo of the frustum of a cone, (b) original point cloud, (c) reconstructed SOR by the curvature computation method, (d) cross-section of (c), (e) reconstructed SOR by the proposed method, and (f) cross-section of (e).
Figure 11: Reconstructed SOR of a vase. (a) A photo of the vase, (b) original point cloud, (c) reconstructed SOR by the curvature computation method, (d) cross-section of (c), (e) reconstructed SOR by the proposed method, and (f) cross-section of (e).
Figure 12: Reconstructed SOR of a pillar of an ancient building. (a) An image of the pillar, (b) original point cloud, (c) reconstructed SOR by the curvature computation method, (d) cross-section of (c), (e) reconstructed SOR by the proposed method, and (f) cross-section of (e).
Figure 13: Reconstructed SOR of a pot. (a) A photo of the pot, (b) original point cloud, (c) reconstructed SOR by the curvature computation method, (d) cross-section of (c), (e) reconstructed SOR by the proposed method, and (f) cross-section of (e).
Figure 14: Reconstructed SOR of a ceramic. (a) A photo of the ceramic, (b) original point cloud, (c) reconstructed SOR by the curvature computation method, (d) cross-section of (c), (e) reconstructed SOR by the proposed method, and (f) cross-section of (e).
Figure 15: Reconstructed SOR of a pillar of an ancient building. (a) Delaunay-based SOR reconstruction, (b) Poisson SOR reconstruction, (c) RBF SOR reconstruction, and (d) the proposed method.
Figure 16: Reconstruction of a simple SOR at different sampling rates: (a) 100%, (b) 75%, (c) 50%, and (d) 25%.
Figure 17: Reconstruction of a tall-thin SOR at different sampling rates: (a) 100%, (b) 75%, (c) 50%, and (d) 25%.
Figure 18: Reconstruction of a short-wide SOR at different sampling rates: (a) 100%, (b) 75%, (c) 50%, and (d) 25%.
Full article
21 pages, 8939 KiB  
Technical Note
FORCE—Landsat + Sentinel-2 Analysis Ready Data and Beyond
by David Frantz
Remote Sens. 2019, 11(9), 1124; https://doi.org/10.3390/rs11091124 - 10 May 2019
Cited by 177 | Viewed by 16417
Abstract
Ever increasing data volumes of satellite constellations call for multi-sensor analysis ready data (ARD) that relieve users from the burden of all costly preprocessing steps. This paper describes the scientific software FORCE (Framework for Operational Radiometric Correction for Environmental monitoring), an ‘all-in-one’ solution for the mass-processing and analysis of Landsat and Sentinel-2 image archives. FORCE is increasingly used to support a wide range of scientific to operational applications that need both large-area coverage and deep, dense temporal information. FORCE is capable of generating Level 2 ARD, and higher-level products. Level 2 processing is comprised of state-of-the-art cloud masking and radiometric correction (including corrections that go beyond ARD specification, e.g., topographic or bidirectional reflectance distribution function correction). It further includes data cubing, i.e., spatial reorganization of the data into a non-overlapping grid system for enhanced efficiency and simplicity of ARD usage. However, the usage barrier of Level 2 ARD is still high due to the considerable data volume and spatial incompleteness of valid observations (e.g., clouds). Thus, the higher-level modules temporally condense multi-temporal ARD into manageable amounts of spatially seamless data. For data mining purposes, per-pixel statistics of clear sky data availability can be generated. FORCE provides functionality for compiling best-available-pixel composites and spectral temporal metrics, which both utilize all available observations within a defined temporal window using selection and statistical aggregation techniques, respectively. These products are immediately fit for common Earth observation analysis workflows, such as machine learning-based image classification, and are thus referred to as highly analysis ready data (hARD).
FORCE provides data fusion functionality to improve the spatial resolution of (i) coarse continuous fields like land surface phenology and (ii) Landsat ARD using Sentinel-2 ARD as prediction targets. Quality controlled time series preparation and analysis functionality with a number of aggregation and interpolation techniques, land surface phenology retrieval, and change and trend analyses are provided. Outputs of this module can be directly ingested into a geographic information system (GIS) to fuel research questions without any further processing, i.e., hARD+. FORCE is open source software under the terms of the GNU General Public License (version 3 or later), and can be downloaded from http://force.feut.de. Full article
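As a sketch of the spectral temporal metric (STM) idea described above, per-pixel statistical aggregation of all clear-sky observations in a temporal window, one might write the following (illustrative names and metric choices; this is not the FORCE implementation):

```python
import numpy as np

def spectral_temporal_metrics(stack, clear):
    """Per-pixel statistics over a time stack of shape (t, y, x), using only
    clear-sky observations (boolean mask of the same shape).
    Returns a dict of 2-D metric images."""
    data = np.where(clear, stack, np.nan)  # mask cloudy observations
    return {
        "p25":  np.nanpercentile(data, 25, axis=0),
        "p50":  np.nanpercentile(data, 50, axis=0),
        "p75":  np.nanpercentile(data, 75, axis=0),
        "mean": np.nanmean(data, axis=0),
        "std":  np.nanstd(data, axis=0),
        "nobs": clear.sum(axis=0),  # clear-sky observation count per pixel
    }
```

The resulting metric images are spatially seamless even though any single acquisition is cloud-gapped, which is what makes them "highly analysis ready" inputs for classification.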
(This article belongs to the Section Remote Sensing Image Processing)
Show Figures

Graphical abstract

Figure 1: Overview of gridding and data cube terminology used in FORCE (Framework for Operational Radiometric Correction for Environmental monitoring).
Figure 2: Overview of the general FORCE workflow. ARD—analysis ready data; hARD—highly analysis ready data; hARD+—highly analysis ready data plus; DEM—digital elevation model; CSO—clear sky observation; LSP—land surface phenology; CF—continuous field; CR—coarse resolution; MR—medium resolution; WVDB—water vapor database; ESA—European Space Agency; USGS—U.S. Geological Survey; NASA—National Aeronautics and Space Administration.
Figure 3: FORCE Level 1 Archiving Suite (L1AS) workflow.
Figure 4: FORCE Level 2 Processing System (L2PS) workflow. TOA—top-of-atmosphere; BT—brightness temperature; BOA—bottom-of-atmosphere; QAI—quality assurance information.
Figure 5: Data cube of Landsat 7/8 and Sentinel-2 A/B Level 2 ARD for southeast Berlin, Germany. A two-month period of true-color image chips for one 30 × 30 km tile is shown.
Figure 6: General concept of higher-level FORCE processing.
Figure 7: Processing scheme of FORCE clear sky observations (CSO).
Figure 8: Processing workflow of the FORCE Level 3 Processing System (L3PS). STM—spectral temporal metric; BAP—best-available-pixel composite.
Figure 9: Best-available-pixel composite (near-infrared, shortwave infrared, red in RGB) for Angola, Zambia, Zimbabwe, Botswana, and Namibia. The 250, 25, and 2.5 km subsets provide different zoom levels of the composited data. The composite is temporally centered on the end-of-season land surface phenology metric for 2018. The land surface phenology was derived from the Moderate Resolution Imaging Spectroradiometer (MODIS), and its spatial resolution was enhanced with the FORCE ImproPhe code (see Section 3.3.5).
Figure 10: Processing workflow of FORCE time series analysis (TSA). All products indicated by a USB plug can be output; all products indicated by * can be centered/standardized before output.
Figure 11: Land surface phenology-based trend and change analysis for Crete, Greece. The change, aftereffect, trend (CAT) transformation shows both long-term (30+ years) gradual and abrupt changes. The CAT transform was applied to the annual value of the base-level phenometric time series, itself derived by inferring land surface phenology metrics from dense time series of green vegetation abundance obtained from spectral mixture analysis (SMA) of Landsat ARD.
Figure 12: Land surface phenology metrics at coarse resolution (MODIS-derived, 500 m) and with improved spatial resolution at 30 m for an image subset in Brandenburg, Germany. Depicted are rate of maximum rise, integral of green season, and value of early minimum in RGB. Using the FORCE ImproPhe module, the spatial resolution was enhanced using multi-temporal Landsat and Sentinel-2 A/B prediction targets.
Figure 13: Landsat ARD at the original 30 m resolution (top), and Landsat ARD with improved spatial resolution at 10 m (bottom) for image subsets from North Rhine-Westphalia, Germany. Using the FORCE L2IMP module, the spatial resolution was enhanced using multi-temporal Sentinel-2 A/B prediction targets.
Full article
20 pages, 17514 KiB  
Article
Automatic Post-Disaster Damage Mapping Using Deep-Learning Techniques for Change Detection: Case Study of the Tohoku Tsunami
by Jérémie Sublime and Ekaterina Kalinicheva
Remote Sens. 2019, 11(9), 1123; https://doi.org/10.3390/rs11091123 - 10 May 2019
Cited by 98 | Viewed by 8616
Abstract
Post-disaster damage mapping is an essential task following tragic events such as hurricanes, earthquakes, and tsunamis. It is also a time-consuming and risky task that still often requires the sending of experts on the ground to meticulously map and assess the damages. Presently, the increasing number of remote-sensing satellites taking pictures of Earth on a regular basis with programs such as Sentinel, ASTER, or Landsat makes it easy to acquire almost in real time images from areas struck by a disaster before and after it hits. While the manual study of such images is also a tedious task, progress in artificial intelligence and in particular deep-learning techniques makes it possible to analyze such images to quickly detect areas that have been flooded or destroyed. From there, it is possible to evaluate both the extent and the severity of the damages. In this paper, we present a state-of-the-art deep-learning approach for change detection applied to satellite images taken before and after the Tohoku tsunami of 2011. We compare our approach with other machine-learning methods and show that our approach is superior to existing techniques due to its unsupervised nature, good performance, and relative speed of analysis. Full article
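The approach compares pre- and post-disaster images through a reconstruction-error (RE) image and derives a change map (CM) from it. A generic way to binarize such an RE image is a simple global threshold (a sketch of the common mean-plus-k-sigma rule; the paper's actual thresholding may differ):

```python
import numpy as np

def change_map_from_re(re_image, k=2.0):
    """Binary change map from a reconstruction-error (RE) image:
    pixels whose RE exceeds mean + k*std are flagged as changed."""
    re_image = np.asarray(re_image, dtype=float)
    mu, sigma = re_image.mean(), re_image.std()
    return re_image > (mu + k * sigma)
```

The intuition is that a model trained to reconstruct the "before" appearance reconstructs unchanged areas well (low RE) and fails on flooded or destroyed areas (high RE), so the tail of the RE distribution marks change.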
(This article belongs to the Section Remote Sensing Image Processing)
Show Figures

Graphical abstract

Figure 1: Two ASTER images taken in (a) July 2010 and (b) November 2010. Image (a) was taken in sunny conditions that caused much higher pixel values for urban-area pixels (zoomed) than in image (b). For example, the value of the same pixel in this area is (83, 185, 126) in (a) and (37, 63, 81) in (b). Moreover, a large part of image (b) is covered by clouds and their shadows.
Figure 2: The three steps of satellite image processing.
Figure 3: Basic architecture of a single-layer autoencoder, made of an encoder going from the input layer to the bottleneck and a decoder from the bottleneck to the output layer.
Figure 4: Algorithm flowchart.
Figure 5: Fully convolutional AE model.
Figure 6: Images taken over the damaged area on (a) 7 July 2010, (b) 29 November 2010, and (c) 19 March 2011.
Figure 7: Change detection results. (a) Image taken on 29 November 2010, (b) image taken on 19 March 2011, (c) ground truth, (d) average RE image of the proposed method, (e) proposed method CM, (f) RBM.
Figure 8: Change detection results. (a) Image taken on 7 July 2010, (b) image taken on 19 March 2011, (c) ground truth, (d) average RE image of the proposed method, (e) proposed method CM, (f) RBM.
Figure 9: Clustering results, flooded area. (a) Image taken on 7 July 2010, (b) image taken on 19 March 2011, (c) ground truth, (d) K-means on the subtracted image, (e) K-means on concatenated encoded images, (f) DEC on concatenated encoded images.
Figure 10: Clustering results, destroyed constructions. (a) Image taken on 7 July 2010, (b) image taken on 19 March 2011, (c) ground truth, (d) K-means on the subtracted image, (e) K-means on concatenated encoded images, (f) DEC on concatenated encoded images.
Figure 11: (a) Extract of the original post-disaster image; (b) clustering results with 4 clusters from the DEC algorithm, applied within a 5 km distance from the shore. Clusters: (1) white, no change; (2) blue, flooded areas; (3) red, damaged constructions; (4) purple, other changes.
Figure 12: (a) Extract of the original post-disaster image; (b) clustering results with 4 clusters from the DEC algorithm. Clusters: (1) white, no change; (2) blue, flooded areas; (3) red, damaged constructions; (4) purple, other changes.
Full article
21 pages, 13168 KiB  
Article
Quality Assessment and Glaciological Applications of Digital Elevation Models Derived from Space-Borne and Aerial Images over Two Tidewater Glaciers of Southern Spitsbergen
by Małgorzata Błaszczyk, Dariusz Ignatiuk, Mariusz Grabiec, Leszek Kolondra, Michał Laska, Leo Decaux, Jacek Jania, Etienne Berthier, Bartłomiej Luks, Barbara Barzycka and Mateusz Czapla
Remote Sens. 2019, 11(9), 1121; https://doi.org/10.3390/rs11091121 - 10 May 2019
Cited by 31 | Viewed by 6288
Abstract
In this study, we assess the accuracy and precision of digital elevation models (DEM) retrieved from aerial photographs taken in 2011 and from Very High Resolution satellite images (WorldView-2 and Pléiades) from the period 2012–2017. Additionally, the accuracy of the freely available Strip product of ArcticDEM was verified. We use the DEMs to characterize geometry changes over Hansbreen and Hornbreen, two tidewater glaciers in southern Spitsbergen, Svalbard. The satellite-based DEMs from WorldView-2 and Pléiades stereo pairs were processed using the Rational Function Model (RFM) without and with one ground control point. The elevation quality of the DEMs over glacierized areas was validated with in situ data: static differential GPS survey of mass balance stakes and GPS kinematic data acquired during ground penetrating radar survey. Results demonstrate the usefulness of the analyzed sources of DEMs for estimation of the total geodetic mass balance of the Svalbard glaciers. DEM accuracy is sufficient to investigate glacier surface elevation changes above 1 m. Strips from the ArcticDEM are generally precise, but some of them showed gross errors and need to be handled with caution. The surface of Hansbreen and Hornbreen has been lowering in recent years. The average annual elevation changes for Hansbreen were more negative in the period 2015–2017 (−2.4 m a−1) than in the period 2011–2015 (−1.7 m a−1). The average annual elevation changes over the studied area of Hornbreen for the period 2012–2017 amounted to −1.6 m a−1. The geodetic mass balance for Hansbreen was more negative than the climatic mass balance estimated using the mass budget method, probably due to underestimation of the ice discharge. From 2011 to 2017, Hansbreen lost on average over 1% of its volume each year. Such a high rate of relative loss illustrates how fast these glaciers are responding to climate change. Full article
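The precision measure used over stable terrain (NMAD) and the annual elevation change rates reported above can be sketched as follows (illustrative helper names; this assumes the two DEMs are already co-registered on the same grid):

```python
import numpy as np

def nmad(dh):
    """Normalized median absolute deviation: a robust spread estimate of
    elevation differences, insensitive to outliers (blunders, artifacts)."""
    dh = np.asarray(dh, dtype=float)
    dh = dh[np.isfinite(dh)]
    return 1.4826 * np.median(np.abs(dh - np.median(dh)))

def annual_elevation_change(dem_t1, dem_t2, years):
    """Per-pixel elevation change rate (m/a) between two co-registered DEMs
    separated by `years` years."""
    return (np.asarray(dem_t2, float) - np.asarray(dem_t1, float)) / years
```

The factor 1.4826 scales the median absolute deviation to match the standard deviation of a Gaussian distribution, which is why NMAD is the preferred precision metric when a few DEM blunders would inflate a plain standard deviation.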
(This article belongs to the Special Issue Remote Sensing of Glaciers at Global and Regional Scales)
Show Figures

Graphical abstract

Figure 1: The study area. Orange outlines show the extent of WorldView-2 stereo pairs, blue outlines the extent of Pléiades stereo pairs, and the green outline the extent of the aerial photographs used in the study. The black rectangle on the overview map shows the location of the study area within the Svalbard archipelago; asterisks mark the locations of Kongsvegen and Kronebreen, two tidewater glaciers mentioned in the text. Background: Sentinel-2 image from 6 July 2018.
Figure 2: Flowchart of the data processing and result analysis. Colors indicate processing steps: white, data sources; grey, digital elevation model (DEM) generation; green, the validation step and the data used in it; blue, glaciological interpretation. ArcticDEM was validated with in situ data but was not used in further glaciological analysis.
Figure 3: DEM 2011 derived for Hansbreen from aerial photographs (a). Differences between DEM 2011 and the GPS precise static survey of mass balance stakes from 2011 (squares), and between the DEM and kinematic GPS from 2011 (dots), are presented over DEM 2011 (a) and as a scatter plot of elevation differences with altitude (b).
Figure 4: Orthoimages (a,e) and hillshaded DEMs (b,c,f,g) derived for Hansbreen. Differences between DEMs and GPR/GPS (dots), and between DEMs and the precise elevations of mass balance stakes (squares) (a,e), are estimated for the DEM generated with one GCP. The yellow triangle indicates the position of the GCP used to generate the DEM. Red ovals locate artifacts on DEMs generated without a GCP (b,f) and with one GCP (c,g). Bottom panels (d,h) show scatter plots of differences with elevation.
Figure 5: Orthoimages (a,e) and hillshades of the DEMs (b,c,f,g) derived for Hornbreen. Differences between DEMs and GPR/GPS (a,e) are estimated for the DEM generated with one GCP. The yellow triangle indicates the position of the GCP used to generate the DEM. Red ovals locate artifacts on DEMs generated without a GCP (b,f) and with one GCP (c,g). Bottom panels (d,h) show scatter plots of differences with elevation.
Figure 6: Elevation differences between the co-registered 2017 Pléiades DEM (1 GCP) and the 2011 aerial-photo DEM over the stable terrain around Hansbreen as a function of slope. For each 5° class of slopes, the median elevation difference is shown with a dot, and the grey shade indicates the NMAD about each median.
Figure 7: Mean annual elevation changes for Hansbreen (a,b) and Hornbreen (c). The black dashed line shows the average firn line in the study periods. The front parts of the glaciers that retreated during the analyzed period are not presented. The red hatch symbol marks shadowed areas removed from the analyses.
Figure 8: Annual average elevation change rates on Hansbreen in 2011–2015 (a) and 2015–2017 (b), and on Hornbreen in 2012–2015 (c), as a function of elevation (in 50 m elevation ranges). Elevation losses over the marine retreat area are not included.
11 pages, 9243 KiB  
Article
Satellite Cross-Talk Impact Analysis in Airborne Interferometric Global Navigation Satellite System-Reflectometry with the Microwave Interferometric Reflectometer
by Raul Onrubia, Daniel Pascual, Hyuk Park, Adriano Camps, Christoph Rüdiger, Jeffrey P. Walker and Alessandra Monerris
Remote Sens. 2019, 11(9), 1120; https://doi.org/10.3390/rs11091120 - 10 May 2019
Cited by 16 | Viewed by 4004
Abstract
This work analyzes the satellite cross-talk observed by the microwave interferometric reflectometer (MIR), a new global navigation satellite system (GNSS) reflectometer, during an airborne field campaign in Victoria and New South Wales, Australia. MIR is a GNSS reflectometer with two 19-element, dual-band arrays, each with four steerable beams. The data collected during the experiment, the characterization of the arrays, and the global positioning system (GPS) and Galileo ephemerides were used to compute the expected delays and power levels of all incoming signals, and the probability of cross-talk was then evaluated. Despite MIR's highly directive arrays, the largest ever for a GNSS-R instrument, one of the flights was found to be contaminated by cross-talk almost half of the time in the L1/E1 frequency band, and all four flights were contaminated ∼5–10% of the time in the L5/E5a frequency band. The cross-talk introduces an error of up to 40 cm standard deviation for altimetric applications and about 0.24 dB for scatterometric applications. Full article
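The skyplots in this paper color-code the angular distance to the closest satellite, a first screen for cross-talk risk. Assuming elevation/azimuth inputs in degrees, this great-circle angle can be computed with the spherical law of cosines (an illustrative sketch, not the MIR processing code):

```python
import numpy as np

def angular_distance(el1, az1, el2, az2):
    """Great-circle angle (degrees) between two satellite directions given
    their elevation and azimuth in degrees, via the spherical law of cosines."""
    el1, az1, el2, az2 = np.radians([el1, az1, el2, az2])
    cosd = (np.sin(el1) * np.sin(el2)
            + np.cos(el1) * np.cos(el2) * np.cos(az1 - az2))
    # clip guards against tiny floating-point excursions outside [-1, 1]
    return np.degrees(np.arccos(np.clip(cosd, -1.0, 1.0)))
```

Two satellites separated by a small angle fall inside the same array beam, so their reflected signals can produce overlapping cross-correlation peaks, the cross-talk condition analyzed in the paper.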
(This article belongs to the Special Issue Radio Frequency Interference (RFI) in Microwave Remote Sensing)
Show Figures

Graphical abstract

Figure 1: (a) Microwave interferometric reflectometer (MIR) instrument and up-looking array mounted inside the airplane, and (b) down-looking array covered with a radome hanging from the airplane's fuselage (with part of the fairing removed).
Figure 2: Ground track of the sea flight over the Bass Strait. The color scale represents the flight height above sea level.
Figure 3: Retrieved L5/E5a interferometric GNSS-reflectometry (iGNSS-R) waveform (blue), Galileo E5a tracked conventional GNSS-reflectometry (cGNSS-R) waveform (red), GPS L5 interfering cGNSS-R waveform (yellow), and reconstructed contaminated waveform (purple). The data were collected during the flight over the Bass Strait.
Figure 4: Skyplot of the tracked satellites in (a) the flight over the entrance to Port Phillip Bay, (b) the sea flight over the Bass Strait, and the flights over Yanco under (c) dry and (d) wet soil conditions. The color scale indicates the angular distance to the closest satellite. Red lines highlight cross-talk from other satellites.
Figure 5: Cumulative histogram of the probability of having cross-talk in the (a) L1/E1 and (b) L5/E5a frequency bands for the different MIR flights.
Figure 6: Probability density function (PDF) of the estimation error of the (a) maximum derivative position (DER), (b) half-power position (HALF), (c) peak position (MAX), and (d) peak amplitude. The "XT" PDFs show the estimation error when the cross-correlation peaks are close enough to interfere with each other, while the "NOXT" PDFs show it when the peaks are far enough apart not to interfere. The analysis distinguishes between the interfering cross-correlation peak arriving later than the tracked satellite's peak ("first peak") or sooner ("second peak").
Full article
22 pages, 6599 KiB  
Article
Improving Jujube Fruit Tree Yield Estimation at the Field Scale by Assimilating a Single Landsat Remotely-Sensed LAI into the WOFOST Model
by Tiecheng Bai, Nannan Zhang, Benoit Mercatoris and Youqi Chen
Remote Sens. 2019, 11(9), 1119; https://doi.org/10.3390/rs11091119 - 10 May 2019
Cited by 17 | Viewed by 4626
Abstract
Few studies have focused on yield estimation of perennial fruit tree crops by integrating remotely-sensed information into crop models. This study presents an attempt to assimilate a single leaf area index (LAI) near the maximum vegetative development stage, derived from Landsat satellite data, into a calibrated WOFOST model to predict yields for jujube fruit trees at the field scale. Field experiments were conducted in three growth seasons to calibrate input parameters for the WOFOST model, with validated phenology errors of −2, −3, and −3 days for emergence, flowering, and maturity, as well as an R2 of 0.986 and RMSE of 0.624 t ha−1 for total aboveground biomass (TAGP), and an R2 of 0.95 and RMSE of 0.19 m2 m−2 for LAI. The Normalized Difference Vegetation Index (NDVI) showed better performance for LAI estimation than the Soil-Adjusted Vegetation Index (SAVI), with better agreement (R2 = 0.79) and prediction accuracy (RMSE = 0.17 m2 m−2). Assimilation after forcing LAI improved the yield prediction accuracy compared with the unassimilated simulation and the remotely sensed NDVI regression method, with an R2 of 0.62 and RMSE of 0.74 t ha−1 for 2016, and an R2 of 0.59 and RMSE of 0.87 t ha−1 for 2017. This research provides a strategy to employ remotely sensed state variables and a crop growth model to improve field-scale yield estimates for fruit tree crops. Full article
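The two vegetation indices compared here have closed-form definitions over the red and near-infrared reflectance bands; a minimal sketch (L = 0.5 is the common soil-brightness default for SAVI, an assumption here, not a value stated in the abstract):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - RED) / (NIR + RED)."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    """Soil-Adjusted Vegetation Index: adds soil-brightness factor L to
    damp the soil-background signal under sparse canopies."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (1 + L) * (nir - red) / (nir + red + L)
```

An LAI inversion model such as the one calibrated in this paper then typically regresses measured LAI against one of these indices (e.g., a linear or exponential fit of LAI on NDVI).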
Show Figures

Graphical abstract
Figure 1">
Figure 1
<p>Study region and observations. Note: The LAI and TDWI sample set is a subset of the yield observations.</p>
Figure 2
<p>(<b>a</b>) A set of leaves scanned for LAI measurement. (<b>b</b>) LAI measurement using a canopy analyzer. (<b>c</b>) Root depth and weight measurement.</p>
Figure 3
<p>Yields (dry weight) of 181 in situ sample spots in 2016 (<b>a</b>) and 2017 (<b>b</b>).</p>
Figure 4
<p>(<b>a</b>) Planting densities for 181 samples. (<b>b</b>) Initial TDWI values without assimilation.</p>
Figure 5
<p>Average correlation coefficient over two years between NDVI (<b>a</b>)/SAVI (<b>b</b>) and yields. Half-month 9 is the first half of May.</p>
Figure 6
<p>(<b>a</b>) Simulated versus measured TAGP. (<b>b</b>) Simulated versus measured LAI.</p>
Figure 7
<p>(<b>a</b>) Trend of LAI versus TDWI. (<b>b</b>) Trend of yield versus TDWI.</p>
Figure 8
<p>Calibrated and validated LAI inversion models (<b>a</b>–<b>d</b>) based on NDVI and SAVI.</p>
Figure 9
<p>(<b>a</b>) Re-calibrated TDWI for 2016 and 2017 versus measured values. (<b>b</b>) Percent difference for the re-calibrated TDWI.</p>
Figure 10
<p>Simulated dry weight of leaves (WLV), dry weight of stems (WST), dry weight of total aboveground biomass (TAGP), and LAI before and after forcing.</p>
Figure 11
<p>(<b>a</b>) Relative percentage difference of predicted yields for 2016. (<b>b</b>) Relative percentage difference of predicted yields for 2017.</p>
Figure 12
<p>(<b>a</b>) Predicted versus measured yields based on three methods for 2016. (<b>b</b>) Predicted versus measured yields based on three methods for 2017.</p>
Figure 13
<p>Frequency distributions (%) of relative bias error (RBE; %) resulting from the comparison between observed and simulated yields. RBE = 0% (red line) represents a perfect prediction. Bin size is 5.</p>
16 pages, 5278 KiB  
Article
Present-Day Deformation of the Gyaring Co Fault Zone, Central Qinghai–Tibet Plateau, Determined Using Synthetic Aperture Radar Interferometry
by Yong Zhang, Chuanjin Liu, Wenting Zhang and Fengyun Jiang
Remote Sens. 2019, 11(9), 1118; https://doi.org/10.3390/rs11091118 - 10 May 2019
Cited by 10 | Viewed by 4122
Abstract
Because of the constant northward movement of the Indian plate and the blockage by the Eurasian continent, the Qinghai–Tibet Plateau has been extruded by north–south compressive stresses since its formation. This has caused the plateau to escape eastward, forming large-scale east–west strike-slip faults and a north–south extensional tectonic system. The Karakorum–Jiali fault, a boundary fault between the Qiangtang and Lhasa terranes, plays an important role in the regional tectonic evolution of the Qinghai–Tibet Plateau. The Gyaring Co fault, in the middle of the Karakorum–Jiali fault zone, is a prominent tectonic component. Strong earthquakes of magnitude 7 or greater have occurred on this fault, providing a background of strong earthquake occurrence; however, current seismic activity is weak. Regional geodetic observation stations are sparsely distributed; thus, the slip rate of the Gyaring Co fault remains unknown. Based on interferometric synthetic aperture radar (InSAR) technology, we acquired current high-spatial-resolution crustal deformation characteristics of the Gyaring Co fault zone. The InSAR-derived deformation features were highly consistent with Global Positioning System observational results, and the accuracy of the InSAR deformation fields was within 2 mm/y. According to the InSAR results, the Gyaring Co fault controls the regional crustal deformation pattern, and the difference in far-field deformation between the two sides of the fault was 3–5 mm/y (parallel to the fault). The inversion results of the back-slip dislocation model indicated that the slip rate of the Gyaring Co fault was 3–6 mm/y and the locking depth was ~20 km. A number of V-shaped conjugate strike-slip faults, formed along the Bangong–Nujiang suture zone in the central and southern parts of the Qinghai–Tibet Plateau, have played an important role in regional tectonic evolution. These V-shaped conjugate shear fault systems include the Gyaring Co and Doma–Nima faults, and their future seismic risk cannot be ignored. Full article
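Slip rates and locking depths like those quoted above are typically inverted with the classical elastic screw-dislocation (back-slip) profile of Savage and Burford, v(x) = (S/π)·arctan(x/D). The following is a minimal generic sketch of that profile, not the authors' inversion code:

```python
import numpy as np

def fault_parallel_velocity(x_km, slip_rate, locking_depth_km):
    """Savage-Burford interseismic profile: fault-parallel surface velocity
    at perpendicular distance x (km) from the fault trace, for a deep slip
    rate S (units of the result, e.g., mm/y) and locking depth D (km)."""
    x = np.asarray(x_km, dtype=float)
    return (slip_rate / np.pi) * np.arctan(x / locking_depth_km)
```

With S = 5 mm/y and D = 20 km, the modeled velocity difference between the far fields of the two sides approaches S, consistent with the 3–5 mm/y far-field difference reported here.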
(This article belongs to the Special Issue Environmental and Geodetic Monitoring of the Tibetan Plateau)
Show Figures

Graphical abstract
Figure 1">
Figure 1
<p>Seismo-tectonic background map of the Gyaring Co fault zone. (<b>a</b>) The solid black lines are boundaries of the main terranes [<a href="#B20-remotesensing-11-01118" class="html-bibr">20</a>], the red circles are earthquakes of Ms 7.0 or greater since 1900, and the red square marks the extent of (b). (<b>b</b>) The fine black lines are faults [<a href="#B21-remotesensing-11-01118" class="html-bibr">21</a>], the blue arrows are GPS horizontal velocity fields [<a href="#B22-remotesensing-11-01118" class="html-bibr">22</a>], the red boxes are the coverages of the InSAR deformation fields, and the black circles are earthquakes of Ms 4.7 or greater (780 B.C. to 2018 A.D.). F1: Gyaring Co Fault Zone; F2: Gase Fault Zone; F3: Doma–Nima Fault Zone.</p>
Figure 2
<p>Interferogram temporal–spatial baseline distribution. (<b>a</b>) Track 305, (<b>b</b>) Track 33, and (<b>c</b>) Track 262. The purple crosses are images participating in the velocity calculation, the light blue crosses are images not participating in the velocity calculation, and the purple straight lines are the generated interferograms.</p>
Figure 3
<p>Average inter-seismic deformation velocity field of the Gyaring Co fault (line-of-sight (LOS) direction). (<b>a</b>) Track 305, (<b>b</b>) Track 33, and (<b>c</b>) Track 262.</p>
Figure 4
<p>Statistics of deformation velocity differences in the orbital overlap areas (parallel to the fault). (<b>a</b>) Track 305–Track 33 and (<b>b</b>) Track 33–Track 262.</p>
Figure 5
<p>Interferometric synthetic aperture radar (InSAR) inter-seismic deformation rate field of the Gyaring Co fault (parallel to the fault). The GPS velocity field and fault are the same as shown in <a href="#remotesensing-11-01118-f001" class="html-fig">Figure 1</a>. The black crosses mark the locations of the three profiles aa’, bb’, and cc’; aa’ is shown in <a href="#remotesensing-11-01118-f006" class="html-fig">Figure 6</a>, and bb’ and cc’ are shown in <a href="#remotesensing-11-01118-f007" class="html-fig">Figure 7</a>.</p>
Figure 6
<p>Comparison of InSAR and GPS profiles (parallel to the fault).</p>
Figure 7
<p>Slip rate and locking depth of the Gyaring Co fault inverted from the InSAR deformation profiles. (<b>a</b>) Profile bb’ and (<b>b</b>) Profile cc’. Profile locations are shown in <a href="#remotesensing-11-01118-f005" class="html-fig">Figure 5</a>.</p>
Figure 8
<p>Distribution map of regional strain and seismic activity. (<b>a</b>) Principal strain rate and surface strain rate; (<b>b</b>) maximum shear strain rate.</p>
Figure 9
<p>Distribution of conjugate strike-slip faults in Central Tibet (modified from Yin and Taylor, 2011 [<a href="#B35-remotesensing-11-01118" class="html-bibr">35</a>]). YGR: Yadong–Gulu rift; PXR: Pumqu–Xianza rift; TYR: Tangra Yum Co rift; XKR: Xiakangjian rift; LGR: Lunggar rift; YRR: Yari rift; PKR: Purong Kangri rift; TKR: Thakkola rift; BNS: Bangong–Nujiang suture.</p>
Figure 10
<p>Tectonic style model diagram of regional tectonics (modified from Yin and Taylor, 2011 [<a href="#B35-remotesensing-11-01118" class="html-bibr">35</a>]).</p>
19 pages, 6971 KiB  
Article
Deep Learning Based Fossil-Fuel Power Plant Monitoring in High Resolution Remote Sensing Images: A Comparative Study
by Haopeng Zhang and Qin Deng
Remote Sens. 2019, 11(9), 1117; https://doi.org/10.3390/rs11091117 - 10 May 2019
Cited by 21 | Viewed by 5406
Abstract
The frequent hazy weather with air pollution in North China has aroused wide attention in the past few years. One of the most important pollution sources is anthropogenic emission by fossil-fuel power plants. To relieve the pollution and assist urban environment monitoring, it is necessary to continuously monitor the working status of power plants. Satellite and airborne remote sensing provide high-quality data for such tasks. In this paper, we design a power plant monitoring framework based on deep learning to automatically detect power plants and determine their working status in high-resolution remote sensing images (RSIs). To this end, we collected a dataset named BUAA-FFPP60 containing RSIs of over 60 fossil-fuel power plants in the Beijing–Tianjin–Hebei region of North China, covering about 123 km² of urban area. We compared eight state-of-the-art deep learning models and comprehensively analyzed their performance in terms of accuracy, speed, and hardware cost. Experimental results illustrate that our deep-learning-based framework can effectively detect fossil-fuel power plants and determine their working status with a mean average precision of up to 0.8273, showing good potential for urban environment monitoring. Full article
(This article belongs to the Special Issue Deep Learning Approaches for Urban Sensing Data Analytics)
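Detection metrics such as the mean average precision reported above hinge on the intersection-over-union (IoU) between predicted and ground-truth boxes; the following is a generic illustration of that standard computation, not the paper's evaluation code:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).
    A detection is commonly counted as correct when IoU >= 0.5."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Overlap is zero when the boxes do not intersect.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

Average precision is then the area under the precision-recall curve obtained by sweeping the detector's confidence threshold, with matches decided by this IoU test; mAP averages AP over the four target classes.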
Show Figures

Graphical abstract
Figure 1">
Figure 1
<p>The framework of Faster R-CNN. RPN, region proposal network; RoI, region of interest; FC, fully connected layer; bbox, bounding box.</p>
Figure 2
<p>The framework of FPN. FPN, feature pyramid network; RPN, region proposal network; RoI, region of interest; FC, fully connected layer; bbox, bounding box.</p>
Figure 3
<p>The framework of R-FCN. RPN, region proposal network; RoI, region of interest; ConvNets and conv, convolutional network.</p>
Figure 4
<p>The framework of DCN. Conv, convolutional network; RoI, region of interest; fc, fully connected layer.</p>
Figure 5
<p>The framework of SSD. Conv, convolutional networks.</p>
Figure 6
<p>The framework of DSSD. Conv, convolutional network.</p>
Figure 7
<p>The framework of YOLOv3. Convs, convolutional networks.</p>
Figure 8
<p>The framework of RetinaNet. FPN, feature pyramid network.</p>
Figure 9
<p>RSI samples in the BUAA-FFPP60 dataset. The first two rows show RSIs. The last row shows cropped targets, where the columns from left to right represent working chimneys, non-working chimneys, working condensing towers, and non-working condensing towers, respectively.</p>
Figure 10
<p>Change of loss with training steps.</p>
Figure 11
<p>Samples of four-class detection results of the eight deep learning models. White boxes represent working condensing towers, light blue boxes non-working condensing towers, blue boxes working chimneys, and green boxes non-working chimneys.</p>
19 pages, 4033 KiB  
Article
Label Noise Cleansing with Sparse Graph for Hyperspectral Image Classification
by Qingming Leng, Haiou Yang and Junjun Jiang
Remote Sens. 2019, 11(9), 1116; https://doi.org/10.3390/rs11091116 - 10 May 2019
Cited by 13 | Viewed by 4053
Abstract
In a real hyperspectral image classification task, label noise inevitably exists in the training samples. To deal with label noise, current methods assume that the noise obeys a Gaussian distribution, which is often not the case in practice, because we are more likely to mislabel training samples at the boundaries between different classes. In this paper, we propose a spectral–spatial sparse graph-based adaptive label propagation (SALP) algorithm to address a more practical case, where the label information is contaminated by both random noise and boundary noise. Specifically, SALP consists of two main steps. First, a spectral–spatial sparse graph is constructed to depict the contextual correlations between pixels within the same superpixel homogeneous region, generated by superpixel image segmentation, and a transfer matrix is then produced to describe the transition probability between pixels. Second, after randomly splitting the training pixels into “clean” and “polluted” sets, we iteratively propagate the label information from “clean” to “polluted” based on the transfer matrix, and the relabeling strategy for each pixel is adaptively adjusted according to its spatial position in the corresponding homogeneous region. Experimental results on two standard hyperspectral image datasets show that the proposed SALP over four major classifiers can significantly decrease the influence of noisy labels, and our method achieves better performance than the baselines. Full article
(This article belongs to the Special Issue Robust Multispectral/Hyperspectral Image Analysis and Classification)
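The propagation step in such graph-based relabeling schemes is commonly the fixed-point iteration F ← αPF + (1 − α)Y over the transfer matrix P. The following is a generic sketch of that iteration, not the SALP implementation itself (the adaptive, superpixel-aware relabeling is omitted):

```python
import numpy as np

def propagate_labels(P, Y, alpha=0.9, n_iter=100):
    """Graph-based label propagation.

    P: (n, n) row-stochastic transition matrix between pixels.
    Y: (n, c) one-hot labels for "clean" pixels, zero rows for "polluted" ones.
    Returns the propagated class index for every pixel.
    """
    F = Y.astype(float).copy()
    for _ in range(n_iter):
        # Blend label mass received from neighbors with the clean labels.
        F = alpha * (P @ F) + (1.0 - alpha) * Y
    return F.argmax(axis=1)
```

With α close to 1, labels diffuse mostly along the graph; the (1 − α) term keeps clean pixels anchored to their observed labels.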
Show Figures

Graphical abstract
Figure 1">
Figure 1
<p>The framework of the proposed SALP algorithm, which mainly includes spectral–spatial sparse graph construction and adaptive label propagation.</p>
Figure 2
<p>The OA of NLA over four typical classifiers at different noise rates on two standard hyperspectral datasets. The results of NN, SVM, RF, and ELM are labeled in red “o”, green “+”, blue “*”, and black “x”.</p>
Figure 3
<p>Graphical illustration of the spectral–spatial sparse graph. In a full graph, vertices are densely connected to each other. In a spectral–spatial sparse graph, vertices are sparsely linked to pixels located in the same homogeneous region. The links between different homogeneous regions and some weak links, marked with red arrows, are removed from the full graph.</p>
Figure 4
<p>(<b>a</b>) The gray image and (<b>b</b>) the corresponding ground-truth map of the Indian Pines dataset.</p>
Figure 5
<p>(<b>a</b>) The gray image and (<b>b</b>) the corresponding ground-truth map of the Salinas dataset.</p>
Figure 6
<p>OA trend of RLPA and SALP over NN and SVM with the “random” setting. The proposed SALP is marked by a dashed line, and RLPA by a solid line. The results of NN are labeled in red “+”, and those of SVM in blue “*”.</p>
Figure 7
<p>OA trend of RLPA and SALP over NN and SVM with the “both” setting. The proposed SALP is marked by a dashed line, and RLPA by a solid line. The results of NN are labeled in red “+”, and those of SVM in blue “*”.</p>
Figure 8
<p>The classification maps of the baselines and SALP over four classifiers on the Indian Pines dataset when <math display="inline"><semantics> <mi>ρ</mi> </semantics></math> = 0.1. From the top row to the last row: NN, SVM, RF, and ELM; from the first column to the last column: NLA, RLPA, and SALP. Please zoom in to see the details.</p>
Figure 9
<p>The classification maps of the baselines and SALP over four classifiers on the Indian Pines dataset when <math display="inline"><semantics> <mi>ρ</mi> </semantics></math> = 0.5. From the top row to the last row: NN, SVM, RF, and ELM; from the first column to the last column: NLA, RLPA, and SALP. Please zoom in to see the details.</p>
Figure 10
<p>The classification maps of the baselines and SALP over four classifiers on the Salinas dataset when (<b>a</b>) <math display="inline"><semantics> <mi>ρ</mi> </semantics></math> = 0.1 and (<b>b</b>) <math display="inline"><semantics> <mi>ρ</mi> </semantics></math> = 0.5. From the top row to the last row: NN, SVM, RF, and ELM; from the first column to the last column: NLA, RLPA, and SALP. Please zoom in to see the details.</p>
Figure 11
<p>OA of the KNN graph and SALP with the “both” setting. The proposed SALP is marked by a dashed line, and the KNN graph by a solid line. The results of NN, SVM, RF, and ELM are labeled in red “o”, green “+”, blue “*”, and black “x”.</p>
Figure 12
<p>OA trend of SALP with different proportions of boundary label noise in the “both” setting. The results of NN, SVM, RF, and ELM are labeled in red “o”, green “+”, blue “*”, and black “x”.</p>
18 pages, 1284 KiB  
Technical Note
Pointing Accuracy of an Operational Polarimetric Weather Radar
by Michael Frech, Theodor Mammen and Bertram Lange
Remote Sens. 2019, 11(9), 1115; https://doi.org/10.3390/rs11091115 - 10 May 2019
Cited by 5 | Viewed by 4015
Abstract
Exact navigation of detected radar signals is crucial for the use of radar data in meteorological applications. The antenna pointing accuracy in azimuth and elevation of a polarimetric weather research radar is assessed as a function of sun position using dedicated solar boxscans acquired in a 10-min sequence. The research radar of the German Meteorological Service (Deutscher Wetterdienst, DWD) is located at the Hohenpeissenberg meteorological observatory and is identical to the 17 weather radars of the German weather radar network. A non-linear azimuthal variation of the azimuthal pointing bias of up to 0.1° is found, which is significant as this is commonly viewed as the target pointing accuracy. This azimuthal variation can be attributed to the mechanical design of the drive train with the angle encoder, including the inherent backlash of the gear-drive assembly. The pointing bias estimates based on over 1000 boxscans from 26 days show a small case-by-case variability, which indicates that dedicated solar boxscans from one day are sufficient to characterize the pointing performance of a particular system. The azimuth and elevation range covered with this approach is limited and depends on the time of year. At Hohenpeißenberg, an azimuth range of up to 50°–300° was covered around the summer solstice, and about 90 boxscans were acquired. It is shown that the pointing bias based on solar boxscan data is consistent with results from the operational assessment of pointing bias using solar hits from operational scanning, if we take into account that the DWD operational scan definition has a maximum elevation of only 25°. The analysis of a full diurnal cycle of boxscans from four operational radar systems shows that the azimuthal dependence of the azimuth bias needs to be evaluated individually for each system. For one of the systems, the azimuthal variation of the pointing bias of about 0.2° seems related to the bull gear. A difference in the pointing bias between the horizontal and vertical polarizations is an indication of beam squint and, eventually, of a feed misalignment. Beam squint, and as such the quality of the antenna assembly, can easily be monitored with this method during the lifetime of a weather radar. Full article
(This article belongs to the Special Issue Radar Polarimetry—Applications in Remote Sensing of the Atmosphere)
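The boxscan analysis described above fits a 2-D surface to the solar power samples and reads the pointing bias off the position of the fitted peak. Below is a minimal sketch using a separable paraboloid; the model form (no cross term) is our assumption, and the operational DWD fit may differ:

```python
import numpy as np

def solar_peak_offset(d_az, d_el, power):
    """Least-squares fit of p = a + b*x + c*y + d*x^2 + e*y^2 to solar
    power samples, where x/y are the azimuth/elevation offsets of each
    sample to the true sun position. Returns the (azimuth, elevation)
    offset of the fitted maximum, i.e., the pointing bias."""
    x = np.asarray(d_az, dtype=float)
    y = np.asarray(d_el, dtype=float)
    A = np.column_stack([np.ones_like(x), x, y, x ** 2, y ** 2])
    a, b, c, d, e = np.linalg.lstsq(A, np.asarray(power, float), rcond=None)[0]
    # Vertex of the paraboloid: dp/dx = 0 and dp/dy = 0.
    return -b / (2.0 * d), -c / (2.0 * e)
```

With the many samples of a dedicated boxscan, the same fit also yields the radar-measured peak solar power (the surface value at the vertex).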
Show Figures

Graphical abstract
Figure 1">
Figure 1
<p>Schematic sketch of the antenna-pedestal assembly on a radar tower. This cartoon provides a simplified view of the different reference planes that need to be leveled so that a 360<math display="inline"><semantics> <msup> <mrow/> <mo>∘</mo> </msup> </semantics></math> rotation of the antenna takes place in an exactly horizontal plane, independent of azimuth position.</p>
Figure 2
<p>Schematic picture of the antenna drive system, illustrating the two angle sources within the DWSR5001/SDP/CE: the motor encoder of the motor that steers the bull gear (to which position commands are sent), and the absolute encoder, whose angles are used to tag each pulse.</p>
Figure 3
<p>Principle sketch of the two angle reading sources within the DWD radar system, namely the absolute encoder and the relative encoder of the motor, shown together with the respective optional communication paths needed to tag a received ray with a time stamp and the corresponding elevation and azimuth angle.</p>
Figure 4
<p>Operational 2-D surface fit to solar power measurements to determine antenna misalignment and the radar-measured peak solar power. The red data points represent the sampled solar power at a given time during operational scanning. The 2-D surface is fitted to the solar power samples and provides the maximum peak solar power based on radar data and its position relative to the true sun position. This approach is also applied when analyzing data from solar boxscans, where many more samples are available due to the dedicated boxscan.</p>
Figure 5
<p>Solar beam plot from 15 June 2018, 17:22 UTC. Shown is the normalized SNRh relative to the peak SNRh.</p>
Figure 6
<p>Azimuth pointing bias based on the solar boxscans from 3 and 15 June 2018 (upper panel). The results are shown for both the horizontal and vertical polarizations H and V. Boxscans were acquired every 10 min. In addition, we show the corresponding azimuth and elevation of the sun on 3 June 2018. Very similar solar azimuth and elevation readings were present on 15 June (not shown). The corresponding pointing bias in elevation is shown in the lower panel.</p>
Figure 7
<p>Beam squint based on the elevation and azimuth estimates for 3 and 15 June 2018.</p>
Figure 8
<p>Azimuth pointing bias (upper panel) and elevation bias (lower panel) based on over 1300 solar boxscans taken between June and August 2018. Also shown are the median bias estimates and the associated 1st and 3rd quartiles of the distribution from non-overlapping windows of 3<math display="inline"><semantics> <msup> <mrow/> <mo>∘</mo> </msup> </semantics></math> width of sun azimuth angles.</p>
Figure 9
<p>Azimuth pointing bias based on the solar boxscans from 3 June 2018 (upper panel). The results are shown for the horizontal polarization H only and are compared to the bias estimate based on the collected solar hits from operational scanning (constant red line; there is only one bias estimate per day). In addition, a horizontal line is drawn at 25<math display="inline"><semantics> <msup> <mrow/> <mo>∘</mo> </msup> </semantics></math> elevation, indicating the maximum elevation of the operational scan. The corresponding pointing bias in elevation is shown in the lower panel.</p>
Figure 10
<p>Azimuth and elevation pointing bias of the Hohenpeißenberg radar from July 2017 until September 2018. The bias estimates are based on the solar hits extracted during operational scanning. The pointing bias is computed once a day.</p>
Figure 11
<p>Azimuth bias determined from the motor encoder (“udp”, data from 9 July 2018) and the absolute encoder (operational angle source, data from 8 July 2018). Also shown are the difference between the two biases and the difference between the two angle sources based on the backlash maintenance scan.</p>
Figure 12
<p>Azimuth bias based on re-sampled boxscan data, using only data acquired while the antenna moves in the clockwise (cw) or counter-clockwise (ccw) direction. The difference between the bias estimates based on cw and ccw movement for a given angle source (absolute or relative encoder) defines the backlash between that encoder and the bull gear. Shown are results for the Hohenpeissenberg radar.</p>
Figure 13
<p>Mean azimuth (upper panel) and elevation (lower panel) pointing bias based on full diurnal cycles of solar boxscans at four radar sites: Boostedt (BOO, 27 June 2018), Neuheilenbach (NHB, 24 July 2018), Hannover (HNR, 24 July 2018), and Flechtdorf (FLD, 27 June 2018). About 90 bias estimates are available for each radar site.</p>
Figure 14
<p>Azimuth pointing bias based on the sun in comparison to the difference between the readings of the absolute and relative encoders for BOO (upper panel, data from 27 June 2018) and HNR (lower panel, see also <a href="#remotesensing-11-01115-f013" class="html-fig">Figure 13</a>, data from 25 July 2018). These readings were gathered using a dedicated maintenance scan, carried out on 8 May 2018 (BOO) and 5 December 2018 (HNR).</p>
18 pages, 651 KiB  
Article
Kernel Joint Sparse Representation Based on Self-Paced Learning for Hyperspectral Image Classification
by Sixiu Hu, Jiangtao Peng, Yingxiong Fu and Luoqing Li
Remote Sens. 2019, 11(9), 1114; https://doi.org/10.3390/rs11091114 - 9 May 2019
Cited by 6 | Viewed by 3038
Abstract
By means of joint sparse representation (JSR) and kernel representation, kernel joint sparse representation (KJSR) models can effectively model the intrinsic nonlinear relations of hyperspectral data and better exploit the spatial neighborhood structure to improve the classification performance of hyperspectral images. However, due to the presence of noisy or inhomogeneous pixels around the central test pixel in the spatial domain, the performance of KJSR is greatly affected. Motivated by the idea of self-paced learning (SPL), this paper proposes a self-paced KJSR (SPKJSR) model to adaptively learn weights and sparse coefficient vectors for different neighboring pixels in the kernel-based feature space. The SPL strategy learns a weight indicating the difficulty of feature pixels within a spatial neighborhood. By assigning small weights to unimportant or complex pixels, the negative effect of inhomogeneous or noisy neighboring pixels can be suppressed, making SPKJSR much more robust. Experimental results on the Indian Pines and Salinas hyperspectral data sets demonstrate that SPKJSR is much more effective than traditional JSR and KJSR models. Full article
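The hard self-paced weighting that underlies SPL fits in a few lines: samples whose current representation loss exceeds a pace-controlled threshold receive zero weight, and raising the threshold over iterations gradually admits harder samples. This is a generic SPL sketch, not the SPKJSR optimization itself:

```python
import numpy as np

def self_paced_weights(losses, lam):
    """Hard self-paced weights: w_i = 1 if loss_i < lam, else 0.
    `losses` holds per-neighbor representation residuals; increasing
    `lam` across iterations includes progressively harder pixels."""
    return (np.asarray(losses, dtype=float) < lam).astype(float)
```

In a neighborhood of hyperspectral pixels, noisy or inhomogeneous neighbors tend to have large residuals and are thus down-weighted to zero early on, which is the robustness mechanism described above.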
Show Figures

Figure 1
<p>Sketches of sparse representation and joint sparse representation. (<b>a</b>) Sparse Representation; (<b>b</b>) Joint Sparse Representation.</p>
Figure 2">
Figure 2
<p>Indian Pines data set. (<b>a</b>) RGB composite image; (<b>b</b>) ground-truth map.</p>
Figure 3
<p>Salinas data set. (<b>a</b>) RGB composite image; (<b>b</b>) ground-truth map.</p>
Figure 4
<p>Indian Pines: (<b>a</b>) Ground-truth map; classification maps obtained by (<b>b</b>) SVM (65.78%); (<b>c</b>) SVMCK (74.94%); (<b>d</b>) SRC (67.28%); (<b>e</b>) OMP (60.89%); (<b>f</b>) JSR (70.18%); (<b>g</b>) WJSR (71.48%); (<b>h</b>) KJSR (80.11%); and (<b>i</b>) SPKJSR (<b>83.61</b>%).</p>
Figure 5
<p>Salinas: (<b>a</b>) Ground-truth map; classification maps obtained by (<b>b</b>) SVM (90.09%); (<b>c</b>) SVMCK (93.77%); (<b>d</b>) SRC (89.16%); (<b>e</b>) OMP (84.75%); (<b>f</b>) JSR (88.37%); (<b>g</b>) WJSR (88.92%); (<b>h</b>) KJSR (95.03%); and (<b>i</b>) SPKJSR (<b>96.88</b>%).</p>
Figure 6
<p>OA versus the number of training samples on Indian Pines.</p>
Figure 7
<p>OA versus the number of training samples on Salinas.</p>
Figure 8
<p>OA versus the number of iterations on Indian Pines.</p>
Figure 9
<p>OA versus the number of iterations on Salinas.</p>
28 pages, 14797 KiB  
Article
Evaluation of the Performance of SM2RAIN-Derived Rainfall Products over Brazil
by Franklin Paredes-Trejo, Humberto Barbosa and Carlos A. C. dos Santos
Remote Sens. 2019, 11(9), 1113; https://doi.org/10.3390/rs11091113 - 9 May 2019
Cited by 36 | Viewed by 5203
Abstract
Microwave-based satellite soil moisture products enable an innovative way of estimating rainfall from soil moisture observations with a bottom-up approach based on the inversion of the soil water balance equation (SM2RAIN). In this work, the SM2RAIN-CCI (SM2RAIN-ASCAT) rainfall data obtained from the inversion of the microwave-based satellite soil moisture (SM) observations derived from the European Space Agency (ESA) Climate Change Initiative (CCI) (from the Advanced SCATterometer (ASCAT) soil moisture data) were evaluated against in situ rainfall observations under different bioclimatic conditions in Brazil. The research V7 version of the Tropical Rainfall Measuring Mission Multi-satellite Precipitation Analysis (TRMM TMPA) was also used as a state-of-the-art rainfall product with a top-down approach. Comparisons were made at daily and 0.25° scales over the time span 2007–2015. The SM2RAIN-CCI, SM2RAIN-ASCAT, and TRMM TMPA products showed relatively good Pearson correlation values (R) with the gauge-based observations, mainly in the Caatinga (CAAT) and Cerrado (CER) biomes (median R > 0.55). SM2RAIN-ASCAT largely underestimated rainfall across the country, particularly over the CAAT and CER biomes (median bias < −16.05%), while SM2RAIN-CCI provided rainfall estimates with only a slight bias (median bias: −0.20%), and TRMM TMPA tended to overestimate the amount of rainfall (median bias: 7.82%). All products exhibited the highest unbiased root mean square error (ubRMSE) values in austral summer (DJF), when heavy rainfall events occur more frequently, whereas the lowest values were observed in austral winter (JJA), when light rainfall events prevail. The SM2RAIN-based products showed a larger contribution of systematic error components than of random error components, while the opposite was observed for TRMM TMPA.
In general, both SM2RAIN-based rainfall products can be used effectively for operational purposes on a daily scale, such as water resources management and agriculture, provided the bias is adjusted beforehand. Full article
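The bottom-up idea behind SM2RAIN can be illustrated with a minimal sketch. It assumes the simplified soil water balance p(t) ≈ Z·ds/dt + a·s(t)^b, with evapotranspiration and runoff neglected during rainfall; the parameter values below (Z, a, b) are illustrative only, not the calibrated values used to generate the SM2RAIN-CCI or SM2RAIN-ASCAT products.

```python
import numpy as np

def sm2rain(s, dt, Z, a, b):
    """Minimal SM2RAIN-style inversion of the soil water balance:
    p(t) ~ Z * ds/dt + a * s(t)**b, neglecting evapotranspiration and
    runoff during rainfall. Returns rainfall per time step (mm), >= 0.

    s  : relative soil saturation series (0..1)
    dt : time step (days); Z : soil water capacity (mm)
    a, b : drainage parameters (illustrative values only)
    """
    ds = np.diff(s)                       # saturation change per step
    s_mid = 0.5 * (s[1:] + s[:-1])        # saturation at mid-step
    p = Z * ds + a * s_mid**b * dt        # water balance inversion
    return np.maximum(p, 0.0)             # negative estimates -> no rain

# Toy series: a single wetting event between dry periods
s = np.array([0.30, 0.30, 0.55, 0.50, 0.48])
rain = sm2rain(s, dt=1.0, Z=80.0, a=2.0, b=10.0)
```

The wetting step (0.30 → 0.55) maps to a rainfall pulse, while the subsequent drying steps yield no rain, which is the essence of estimating rainfall from soil moisture increments.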
(This article belongs to the Special Issue Precipitation and Water Cycle Measurements Using Remote Sensing)
Figure 1
<p>Geographical location of the study area: (<b>a</b>) Brazil’s main biomes: AMZ, Amazônia; CER, Cerrado; MAT, Mata Atlântica; CAAT, Caatinga; PTN, Pantanal; and PMP, Pampa. (<b>b</b>) Land cover for 2015 derived from the Land Cover-Climate Change Initiative (LC-CCI) product (source: <a href="http://maps.elie.ucl.ac.be/CCI" target="_blank">http://maps.elie.ucl.ac.be/CCI</a>) [<a href="#B57-remotesensing-11-01113" class="html-bibr">57</a>]. (<b>c</b>) Brazil’s terrain elevation, based on 250-m Shuttle Radar Topography Mission Digital Elevation Model (SRTM DEM) images (source: <a href="https://earthexplorer.usgs.gov" target="_blank">https://earthexplorer.usgs.gov</a>) [<a href="#B58-remotesensing-11-01113" class="html-bibr">58</a>]. (<b>d</b>) Mean annual rainfall derived from the ground-based gridded rainfall dataset developed by Xavier et al. [<a href="#B59-remotesensing-11-01113" class="html-bibr">59</a>] (period: 1980–2015).</p>
Figure 2
<p>Boxplots for the mean monthly rainfall estimated from ground-based gridded rainfall dataset developed by Xavier et al. [<a href="#B59-remotesensing-11-01113" class="html-bibr">59</a>] over the biomes: (<b>a</b>) AMZ; (<b>b</b>) MAT; (<b>c</b>) CER; (<b>d</b>) CAAT; (<b>e</b>) PTN; and (<b>f</b>) PMP during the period 1980–2015. The center line of each boxplot depicts the median value (50th percentile), and the box encompasses the 25th and 75th percentiles of the sample data. The whiskers extend from q1 − 1.5 × (q3 − q1) to q3 + 1.5 × (q3 − q1), where q1 and q3 are the 25th and 75th percentiles of the sample data, respectively.</p>
Figure 3
<p>Flowchart of the summarized research design and method.</p>
Figure 4
<p>Spatial distribution of the seasonal and annual climatic mean rainfall in mm/day from: (<b>a1</b>–<b>a5</b>) the GBGR dataset; (<b>b1</b>–<b>b5</b>) the SM2RAIN-CCI rainfall product; (<b>c1</b>–<b>c5</b>) the SM2RAIN-ASCAT rainfall product; (<b>d1</b>–<b>d5</b>) the TRMM TMPA rainfall product during 2007–2015. The Brazilian biomes are shown in <a href="#remotesensing-11-01113-f001" class="html-fig">Figure 1</a>a. White cells in panels (b1–b5) depict gaps due to the application of a static mask used by the SM2RAIN-CCI product [<a href="#B32-remotesensing-11-01113" class="html-bibr">32</a>].</p>
Figure 5
<p>Pearson linear correlation derived from the SM2RAIN-CCI rainfall product against the GBGR dataset, the SM2RAIN-ASCAT rainfall product against the GBGR dataset, and the TRMM TMPA rainfall product against the GBGR dataset for: (<b>a</b>,<b>b</b>,<b>c</b>) annual; (<b>d</b>,<b>e</b>,<b>f</b>) winter; (<b>g</b>,<b>h</b>,<b>i</b>) spring; (<b>j</b>,<b>k</b>,<b>l</b>) summer; and (<b>m</b>,<b>n</b>,<b>o</b>) autumn during 2007–2015. White cells in the panels are as per <a href="#remotesensing-11-01113-f004" class="html-fig">Figure 4</a>. For each product and season, the median value per biome is reported.</p>
Figure 6
<p>The percent bias derived from the SM2RAIN-CCI rainfall product against the GBGR dataset, the SM2RAIN-ASCAT rainfall product against the GBGR dataset, and the TRMM TMPA rainfall product against the GBGR dataset for: (<b>a</b>,<b>b</b>,<b>c</b>) annual; (<b>d</b>,<b>e</b>,<b>f</b>) winter; (<b>g</b>,<b>h</b>,<b>i</b>) spring; (<b>j</b>,<b>k</b>,<b>l</b>) summer; and (<b>m</b>,<b>n</b>,<b>o</b>) autumn during 2007–2015. White cells in the panels are as per <a href="#remotesensing-11-01113-f004" class="html-fig">Figure 4</a>. For each product and season, the median value per biome is reported.</p>
Figure 7
<p>Daily rainfall estimates from TRMM TMPA, SM2RAIN-CCI, and SM2RAIN-ASCAT products against in situ daily rainfall from the GBGR dataset located in the biomes: (<b>a</b>) AMZ; (<b>b</b>) MAT; (<b>c</b>) CER; (<b>d</b>) CAAT; (<b>e</b>) PTN; and (<b>f</b>) PMP during the period 2007–2015. The orange line indicates 1:1 correspondence and the red line gives the linear regression best fit. The BS1, BS4, BS3, BS2, BS6, and BS5 benchmark sites shown in <a href="#remotesensing-11-01113-f001" class="html-fig">Figure 1</a>a provided the in situ rainfall data for AMZ, CAAT, CER, MAT, PMP, and PTN, respectively.</p>
Figure 8
<p>Monthly time series for: (<b>a</b>) R (dimensionless); (<b>b</b>) RMSE (mm/day); (<b>c</b>) ubRMSE (mm/day); and (<b>d</b>) B (%) derived from the SM2RAIN-ASCAT rainfall product against the GBGR dataset (red line), the SM2RAIN-CCI rainfall product against the GBGR dataset (blue line), and the TRMM TMPA rainfall product against the GBGR dataset (orange line) for all-Brazil during the period 2007–2015.</p>
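The continuous scores plotted in this figure (R, RMSE, ubRMSE, and percent bias B) follow standard definitions and can be sketched as below. This is an illustrative implementation, not the authors' code; the function name and demo values are assumptions.

```python
import numpy as np

def skill_scores(est, obs):
    """Continuous validation scores for a satellite product against
    gauges: Pearson R, RMSE, unbiased RMSE (mean bias removed),
    and percent bias B."""
    est, obs = np.asarray(est, float), np.asarray(obs, float)
    err = est - obs
    r = np.corrcoef(est, obs)[0, 1]                   # Pearson correlation
    rmse = np.sqrt(np.mean(err**2))                   # total error (mm/day)
    ubrmse = np.sqrt(np.mean((err - err.mean())**2))  # error after removing mean bias
    bias_pct = 100.0 * err.sum() / obs.sum()          # percent bias B
    return r, rmse, ubrmse, bias_pct

# A product that is always 1 mm/day too wet: R = 1, ubRMSE = 0, B = 40%
r, rmse, ubrmse, b = skill_scores([2, 3, 4, 5], [1, 2, 3, 4])
```

The toy case makes the distinction in the figure concrete: a purely additive bias leaves R perfect and ubRMSE at zero, while RMSE and B capture the offset.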
Figure 9
<p>Spatial distributions of systematic [%] and random [%] error components across Brazil from: (<b>a</b>,<b>d</b>) the SM2RAIN-CCI product; (<b>b</b>,<b>e</b>) the SM2RAIN-ASCAT product; and (<b>c</b>,<b>f</b>) the TRMM TMPA product against the GBGR dataset for 2007–2015. White cells for all products are as per <a href="#remotesensing-11-01113-f004" class="html-fig">Figure 4</a>.</p>
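One common way to split the error into systematic and random shares, shown here only as an illustrative sketch (the exact decomposition used in the paper is not stated in this summary), is the regression-based Willmott-type split: the error reproduced by a linear fit of the estimates on the observations is systematic, the residual is random.

```python
import numpy as np

def error_decomposition(est, obs):
    """Regression-based (Willmott-type) split of the MSE into systematic
    and random parts: fit est_hat = a + b*obs by least squares; the part
    of the error captured by the fit is systematic, the residual random.
    Returns percentages that sum to 100."""
    est, obs = np.asarray(est, float), np.asarray(obs, float)
    slope, intercept = np.polyfit(obs, est, 1)   # linear fit est ~ obs
    est_hat = intercept + slope * obs
    mse_sys = np.mean((est_hat - obs)**2)        # systematic component
    mse_rand = np.mean((est - est_hat)**2)       # random component
    total = mse_sys + mse_rand
    return 100 * mse_sys / total, 100 * mse_rand / total

# A product that doubles every observation: purely systematic error
sys_pct, rand_pct = error_decomposition([2, 4, 6, 8, 10], [1, 2, 3, 4, 5])
```

A systematic error (as in the toy case) is in principle correctable by rescaling, which is why a large systematic share for the SM2RAIN products is consistent with the suggestion to adjust the bias before operational use.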
Figure 10
<p>Spatial distributions of the POD [fraction] and FAR [fraction] across Brazil from: (<b>a</b>,<b>d</b>) the SM2RAIN-CCI product; (<b>b</b>,<b>e</b>) the SM2RAIN-ASCAT product; and (<b>c</b>,<b>f</b>) the TRMM TMPA product against the GBGR dataset for 2007–2015. White cells for all products are as per <a href="#remotesensing-11-01113-f004" class="html-fig">Figure 4</a>. POD and FAR are calculated with a threshold of 1 mm/day.</p>
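POD and FAR come from the standard 2×2 rain/no-rain contingency table at the stated 1 mm/day threshold; a minimal sketch (function name and demo values are assumptions, the metric definitions are standard):

```python
import numpy as np

def categorical_scores(est, obs, thr=1.0):
    """Rain/no-rain contingency scores at a daily threshold (1 mm/day here).
    hits: both product and gauge see rain; misses: gauge rain missed by the
    product; false alarms: product rain not confirmed by the gauge."""
    e = np.asarray(est, float) >= thr
    o = np.asarray(obs, float) >= thr
    hits = np.sum(e & o)
    misses = np.sum(~e & o)
    false_alarms = np.sum(e & ~o)
    pod = hits / (hits + misses)                 # probability of detection
    far = false_alarms / (hits + false_alarms)   # false alarm ratio
    return pod, far

# 2 hits, 1 miss, 1 false alarm -> POD = 2/3, FAR = 1/3
pod, far = categorical_scores([2, 2, 0, 0, 5], [0, 2, 3, 0, 5])
```

POD is read against the gauge rain days (how many were detected), FAR against the product rain days (how many were spurious), so the two can move independently.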
Figure 11
<p>Roebber’s performance diagram [<a href="#B78-remotesensing-11-01113" class="html-bibr">78</a>] for the SM2RAIN-ASCAT rainfall product (red circles), the SM2RAIN-CCI rainfall product (blue circles), and the TRMM TMPA rainfall product (orange circles) in the biomes: (<b>a</b>) AMZ; (<b>b</b>) CAAT; (<b>c</b>) CER; (<b>d</b>) MAT; (<b>e</b>) PMP; and (<b>f</b>) PTN during 2007–2015. Dashed lines depict the BS metric (see <a href="#remotesensing-11-01113-t004" class="html-table">Table 4</a>) with labels on the upper axis, whereas labeled solid contours show values of CSI (see <a href="#remotesensing-11-01113-t004" class="html-table">Table 4</a>). Circles portray the six rainfall thresholds. The smallest circle indicates the rain/no rain threshold (≤1 mm), and the largest circle indicates the threshold ≥ 20 mm.</p>
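The geometry of the diagram can be sketched from the standard relations among the scores, assuming BS in the caption denotes the frequency bias score: a point sits at (success ratio, POD), and both the bias and CSI isolines follow from those two coordinates.

```python
def roebber_coords(pod, far):
    """Quantities plotted in Roebber's performance diagram: a point sits at
    x = success ratio (SR = 1 - FAR), y = POD; the frequency bias (dashed
    lines) and critical success index CSI (solid contours) follow from it."""
    sr = 1.0 - far                                # success ratio
    bias = pod / sr                               # frequency bias BS
    csi = 1.0 / (1.0 / sr + 1.0 / pod - 1.0)      # critical success index
    return sr, bias, csi

# POD = 2/3 with FAR = 1/3: unbiased forecast (BS = 1) and CSI = 0.5
sr, bias, csi = roebber_coords(2 / 3, 1 / 3)
```

Because BS = POD/SR, points on the main diagonal of the diagram are unbiased, and perfect performance sits at the upper-right corner where SR = POD = CSI = 1.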