Remote Sens., Volume 9, Issue 5 (May 2017) – 112 articles

Cover Story (view full-size image): Passive detection of sun-induced chlorophyll fluorescence has emerged as the most powerful remote sensing tool for quantifying dynamic changes of photosynthetic activity at large vegetation scales. However, the interpretation of the fluorescence measured at these levels is still constrained by an insufficient understanding of how vegetation structure affects the distribution patterns of this signal at the top of the canopy. To address this question, we designed a novel approach that provides a pixel-based co-registration of sun-induced fluorescence images and a surface model of a sugar beet canopy, both measured at ground level at high resolution. The results describe for the first time how spatio-temporal variations of fluorescence are related to the orientation, inclination and distribution of the leaves in a canopy, in relation to the sun and observer positions. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
Article
Estimating Wheat Yield in China at the Field and District Scale from the Assimilation of Satellite Data into the Aquacrop and Simple Algorithm for Yield (SAFY) Models
by Paolo Cosmo Silvestro, Stefano Pignatti, Simone Pascucci, Hao Yang, Zhenhai Li, Guijun Yang, Wenjiang Huang and Raffaele Casa
Remote Sens. 2017, 9(5), 509; https://doi.org/10.3390/rs9050509 - 22 May 2017
Cited by 83 | Viewed by 9783
Abstract
Accurate yield estimation at the field scale is essential for the development of precision agriculture management, whereas at the district level it can provide valuable information for supply chain management. In this paper, Huan Jing (HJ) satellite HJ1A/B and Landsat 8 Operational Land Imager (OLI) images were employed to retrieve leaf area index (LAI) and canopy cover (CC) in the Yangling area (Central China). These variables were then assimilated into two crop models, Aquacrop and the simple algorithm for yield (SAFY), in order to compare their performance and practicality. Due to the models' specificities and computational constraints, different assimilation methods were used: for SAFY, the ensemble Kalman filter (EnKF) was applied with LAI as the observed variable, while for Aquacrop, particle swarm optimization (PSO) was applied using canopy cover (CC). These techniques were applied and validated at both the field and the district scale. In the field application, the lowest relative root-mean-square error (RRMSE) value of 18% was obtained using EnKF with SAFY. At the district scale, both methods were able to provide production estimates in agreement with data provided by the official statistical offices. From an operational point of view, SAFY with the EnKF method was more suitable than Aquacrop with PSO in a data assimilation context. Full article
(This article belongs to the Special Issue Earth Observations for Precision Farming in China (EO4PFiC))
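As a rough illustration of the assimilation step described in the abstract, the sketch below implements one stochastic ensemble Kalman filter (EnKF) analysis for a scalar state such as modeled LAI. It is a minimal sketch, not the authors' implementation; the ensemble size and all numbers are illustrative.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_err_var, rng):
    """One stochastic EnKF analysis step for a scalar state (e.g., LAI).

    ensemble    : (N,) array of model-forecast LAI values
    obs         : satellite-retrieved LAI observation
    obs_err_var : variance of the observation error
    """
    n = ensemble.size
    # Perturb the observation once per member (stochastic EnKF).
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_err_var), n)
    forecast_var = np.var(ensemble, ddof=1)
    # Scalar Kalman gain: forecast variance vs. observation error variance.
    gain = forecast_var / (forecast_var + obs_err_var)
    # Pull every member toward its perturbed observation.
    return ensemble + gain * (perturbed - ensemble)

rng = np.random.default_rng(0)
forecast = rng.normal(3.0, 0.4, 50)            # ensemble of modeled LAI
analysis = enkf_update(forecast, 3.6, 0.1**2, rng)
print(forecast.mean(), "->", analysis.mean())  # mean moves toward the observation
```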
Show Figures

Graphical abstract
Figure 1">
Figure 1: Map of China showing the location of the test area of Yangling (Shaanxi Province).
Figure 2: Ten-day mean temperatures (a) and rainfall (b) for the Yangling study site, for the wheat crop cycles (1 September to 30 June) of 2012–2013 (blue), 2013–2014 (green), and 2014–2015 (red).
Figure 3: Flowchart of the methodology applied, from remote sensing data to yield estimation. EnKF: ensemble Kalman filter; PSO: particle swarm optimization; LAI: leaf area index; CC: canopy cover; SAFY: simple algorithm for yield; ANN: artificial neural network; PROSAIL: PROSPECT + SAIL models [16].
Figure 4: Validation results for the retrieval of LAI from Huan Jing satellites HJ1A, HJ1B and Landsat 8 images, using field measurements of 3 years in the Yangling rural area.
Figure 5: Validation results for the retrieval of canopy cover (CC) from HJ1A, HJ1B and Landsat 8 images, using field measurements of 3 years in the Yangling rural area. The field-measured LAI was converted into CC using Equation (1).
Figure 6: Comparison between measured and simulated wheat yield using (a) the EnKF method with the SAFY model after a unique general calibration for all years; (b) the PSO method with the Aquacrop model after a unique general calibration for all years; and (c) the PSO method with the Aquacrop model after specific calibrations performed for each year.
Figure 7: Wheat yield map (t·ha⁻¹) for Yangling estimated using the EnKF assimilation method with the SAFY model for 2013 (a) and 2014 (b).
Figure 8: Wheat yield map (t·ha⁻¹) for Yangling, estimated using the PSO assimilation method with the Aquacrop model for 2013 (a) and 2014 (b).
Figure 9: Comparison of wheat production estimated by official statistical surveys (black), and from assimilation using EnKF with SAFY (white) and PSO with Aquacrop (grey).
Article
Hypergraph Embedding for Spatial-Spectral Joint Feature Extraction in Hyperspectral Images
by Yubao Sun, Sujuan Wang, Qingshan Liu, Renlong Hang and Guangcan Liu
Remote Sens. 2017, 9(5), 506; https://doi.org/10.3390/rs9050506 - 22 May 2017
Cited by 30 | Viewed by 8492
Abstract
The fusion of spatial and spectral information in hyperspectral images (HSIs) is useful for improving classification accuracy. However, this approach usually results in features of higher dimension, and the curse of dimensionality may arise owing to the small ratio between the number of training samples and the dimensionality of the features. To ease this problem, we propose a novel algorithm for spatial-spectral feature extraction based on hypergraph embedding. Firstly, each HSI pixel is regarded as a vertex, and the joint of extended morphological profiles (EMP) and spectral features is adopted as the feature associated with the vertex. A hypergraph is then constructed by the K-nearest-neighbor method, in which each pixel and its K most relevant pixels are linked as one hyperedge to represent the complex relationships between HSI pixels. Secondly, a hypergraph embedding model is designed to learn a low-dimensional feature that preserves the geometric structure of the HSI. An adaptive hyperedge weight estimation scheme is also introduced to preserve the prominent hyperedges through a regularization constraint on the weights. Finally, the learned low-dimensional features are fed to a support vector machine (SVM) for classification. Experimental results on three benchmark hyperspectral databases highlight the importance of spatial-spectral joint feature embedding for the accurate classification of HSI data and show that the adaptive weight estimation further improves the classification accuracy, verifying the proposed method. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)
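The hypergraph construction sketched in the abstract can be made concrete with a few lines of linear algebra. The snippet below builds the K-nearest-neighbor incidence matrix H and the standard normalized hypergraph Laplacian, then takes its bottom eigenvectors as a low-dimensional embedding. It is a simplified sketch with uniform hyperedge weights; the paper's adaptive weight estimation and EMP features are not reproduced.

```python
import numpy as np
from scipy.spatial.distance import cdist

def hypergraph_laplacian(X, k=5):
    """KNN hypergraph Laplacian for features X of shape (n, d).

    Each sample and its k nearest neighbours form one hyperedge,
    so the incidence matrix H is n x n (vertex i in hyperedge j)."""
    n = X.shape[0]
    dist = cdist(X, X)
    H = np.zeros((n, n))
    for j in range(n):
        H[np.argsort(dist[:, j])[:k + 1], j] = 1.0   # self + k neighbours
    w = np.ones(n)                # uniform hyperedge weights in this sketch
    Dv = H @ w                    # vertex degrees
    De = H.sum(axis=0)            # hyperedge degrees
    Dv_is = np.diag(1.0 / np.sqrt(Dv))
    Theta = Dv_is @ H @ np.diag(w / De) @ H.T @ Dv_is
    return np.eye(n) - Theta      # normalized hypergraph Laplacian

X = np.random.rand(40, 10)        # stand-ins for EMP + spectral features
L = hypergraph_laplacian(X, k=5)
# Embedding: eigenvectors of L with the smallest nonzero eigenvalues.
vals, vecs = np.linalg.eigh(L)
embedding = vecs[:, 1:4]          # 3-dimensional spatial-spectral feature
```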
Show Figures

Graphical abstract
Figure 1">
Figure 1: The flowchart of the proposed method.
Figure 2: An example of a graph and a hypergraph. (a) Simple graph, where each edge consists of only two data points; (b) hypergraph G, where each hyperedge is marked by an ellipse and consists of at least two data points; (c) taking the seven vertices as an example, H is the incidence matrix of G, whose values are usually binary.
Figure 3: Indian Pines. (a) Three-channel color composite image with bands 65, 52, 36; (b,c) ground-truth map and class labels; (d–i) classification maps of PCA, EMP, EMPSpe, SH, SSHG, SSHG*, respectively.
Figure 4: Pavia University. (a) Three-channel color composite image with bands 102, 56, 31; (b,c) ground-truth map and class labels; (d–i) classification maps of PCA, EMP, EMPSpe, SH, SSHG, SSHG*, respectively.
Figure 5: Botswana. (a) Three-channel color composite image with bands 65, 52, 36; (b,c) ground-truth map and class labels; (d–i) classification maps of PCA, EMP, EMPSpe, SH, SSHG, SSHG*, respectively.
Figure 6: Effects of the number K of nearest neighbors on OA. (a) Indian Pines; (b) Pavia University; (c) Botswana.
Figure 7: Effects of the reduced dimensions. (a) Indian Pines; (b) Pavia University; (c) Botswana.
Article
3D Imaging of Greenhouse Plants with an Inexpensive Binocular Stereo Vision System
by Dawei Li, Lihong Xu, Xue-song Tang, Shaoyuan Sun, Xin Cai and Peng Zhang
Remote Sens. 2017, 9(5), 508; https://doi.org/10.3390/rs9050508 - 21 May 2017
Cited by 46 | Viewed by 12399
Abstract
Nowadays, 3D imaging of plants not only contributes to monitoring and managing plant growth, but is also becoming an essential part of high-throughput plant phenotyping. In this paper, an inexpensive (less than 70 USD) and portable platform with binocular stereo vision is established, which can be controlled by a laptop. In the stereo matching step, an efficient cost measure, AD-Census, is integrated with the adaptive support-weight (ASW) approach to improve the ASW's performance on real plant images. In a quantitative assessment, our stereo algorithm reaches an average error rate of 6.63% on the Middlebury datasets, which is lower than the error rates of the original ASW approach and several other popular algorithms. Imaging experiments using the proposed stereo system were carried out in three different environments: an indoor lab, an open field with grass, and a multi-span glass greenhouse. Six types of greenhouse plants were used in the experiments; half of them are ornamentals and the others are greenhouse crops. The imaging accuracy of the proposed method at different baseline settings was investigated, and the results show that the optimal baseline length (the distance between the two cameras of the stereo system) is around 80 mm for a good trade-off between depth accuracy and mismatch rate for a plant placed within 1 m of the cameras. Error analysis from both theoretical and experimental sides shows that for an object approximately 800 mm away from the stereo platform, the measured depth error of a single point is no higher than 5 mm, which is tolerable considering the dimensions of greenhouse plants. By applying disparity refinement, the proposed methodology generates dense and accurate point clouds of crops in different environments including an indoor lab, an outdoor field, and a greenhouse. Our approach also shows invariance against changing illumination in a real greenhouse, as well as the capability of recovering 3D surfaces of highlighted leaf regions. The method not only works on a binocular stereo system, but is also potentially applicable to an SFM-MVS (structure-from-motion and multiple-view stereo) system or any multi-view imaging system that uses stereo matching. Full article
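For readers unfamiliar with the AD-Census cost mentioned above, the following sketch computes it for a single pixel pair: the absolute intensity difference (AD) and the Hamming distance between census-transform bit strings are each squashed with an exponential and summed. This is an illustrative single-pixel version under assumed parameters (window size, lambda values), not the paper's full ASW pipeline.

```python
import numpy as np

def census_bits(img, y, x, win=3):
    """Census transform: window pixels compared against the center pixel."""
    patch = img[y - win:y + win + 1, x - win:x + win + 1]
    return (patch < img[y, x]).ravel()

def ad_census_cost(left, right, y, x, d, lam_ad=10.0, lam_census=30.0):
    """AD-Census cost between left (y, x) and right (y, x - d)."""
    c_ad = abs(float(left[y, x]) - float(right[y, x - d]))
    c_cen = np.sum(census_bits(left, y, x) != census_bits(right, y, x - d))
    # Exponential normalization keeps both terms in [0, 1).
    return (1 - np.exp(-c_ad / lam_ad)) + (1 - np.exp(-c_cen / lam_census))

L_img = np.random.randint(0, 255, (50, 50)).astype(np.float32)
R_img = np.roll(L_img, -4, axis=1)      # synthetic right view, disparity = 4
costs = [ad_census_cost(L_img, R_img, 25, 30, d) for d in range(1, 9)]
print(int(np.argmin(costs)) + 1)        # recovers the 4-pixel disparity
```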
Show Figures

Graphical abstract
Figure 1">
Figure 1: The established low-cost, portable binocular stereo vision system.
Figure 2: Six types of greenhouse plants were used in experiments. (a) Epipremnum aureum; (b) Aglaonema modestum; (c) pepper plant; (d) Monstera deliciosa; (e) greenhouse strawberry plants; and (f) greenhouse turnip plants.
Figure 3: The 3D imaging experiments for sample plants were carried out in three environments: (a) an indoor lab at Donghua University, (b) an open field at Donghua University, and (c) a multi-span glass greenhouse at Tongji University.
Figure 4: A plot of the epipolar geometry of a binocular stereo vision system.
Figure 5: Working principle of the proposed binocular stereo vision system. The imaging procedure consists of four steps: (i) camera calibration, (ii) stereo rectification, (iii) stereo matching, and (iv) 3D point cloud reconstruction. The left column (a,c,e,g) shows the methodology and equipment used in the four steps, and the right column (b,d,f,h) shows intermediate and final results step by step. (a) The stereo vision platform, which contains two webcams, and an image pair of a chessboard used for calibration. (b) Records of the spatial positions of the camera system and the chessboard during calibration, obtained using the Camera Calibration Toolbox for Matlab [43]. Stereo rectification aligns the epipolar lines of the left and right cameras and reduces the camera distortion near the image boundaries; after rectification, the correspondence search in stereo matching is reduced from 2D to 1D. (c) Stereo rectification corresponds to making the principal axes of the two cameras parallel. (d) After rectification, the camera distortion around image boundaries is reduced. (e) Disparity is formed by two different image planes; (f) the disparity map generated by stereo matching algorithms. (g) The mapping from the disparity map to the real 3D space by triangulation; (h) the final 3D point cloud.
Figure 6: Calibration using a two-sided chessboard. (a,b) Two images of one side with 8 × 8 grids; (c,d) two images of the other side with 10 × 10 grids.
Figure 7: A rectified image pair of a greenhouse ornamental plant.
Figure 8: The census measure for two pixels.
Figure 9: Experimental results on the Middlebury datasets [48]. (a) Ground-truth images; (b–e) disparity images of the proposed method, the original ASW [39], GC [37], and SGBM [38], respectively. An algorithm can be considered good if its result is similar to the ground truth. The proposed results are superior to the others.
Figure 10: Disparity images on real plant images: (a) real plant images; (b) disparity images generated by the original ASW; (c) disparity images generated by the proposed method.
Figure 11: Disparity maps and point clouds generated under four different baseline lengths (45 mm, 65 mm, 85 mm, and 105 mm); the plant is placed about 1.0 m away from the stereo system. (a) Disparity maps; (b) 3D point clouds viewed from the top of the plant; (c) side-view point clouds, where each horizontal line stands for a depth layer; (d) the point clouds viewed from another viewpoint.
Figure 12: Comparison of 3D point clouds of Epipremnum aureum obtained with and without disparity refinement: (a) the point cloud without disparity refinement; (b) the point cloud with disparity refinement.
Figure 13: The stereo-rectified left image and reconstructed 3D point cloud of the Epipremnum aureum sample plant. (a) The rectified left webcam image used for generating the 3D point cloud; (b,c) the reconstructed 3D point cloud viewed from two different positions.
Figure 14: The stereo-rectified left image and reconstructed 3D point cloud of the pepper sample plant. (a) The rectified left webcam image used for generating the 3D point cloud; (b,c) the reconstructed 3D point cloud viewed from two different positions.
Figure 15: The stereo-rectified left image and reconstructed 3D point cloud of a Monstera deliciosa sample plant. (a) The rectified left webcam image used for generating the 3D point cloud; (b,c) the reconstructed 3D point cloud viewed from two different positions.
Figure 16: The stereo-rectified left image and reconstructed 3D point cloud of greenhouse strawberry samples. (a) The rectified left webcam image used for generating the 3D point cloud; (b,c) the reconstructed 3D point cloud viewed from two different positions.
Figure 17: The stereo-rectified left image and reconstructed 3D point cloud of greenhouse turnip samples. (a) The rectified left webcam image used for generating the 3D point cloud; (b,c) the reconstructed 3D point cloud viewed from two different positions.
Figure 18: A textured box placed at different distances from the stereo platform for depth error measurements. The black number below each sub-image is the real distance from the box to the camera, and the red number below each box image is the measured depth error. The baseline in this test is fixed at 85 mm. The absolute values of the measured depth errors are also plotted as square data labels on the blue curve in Figure 19.
Figure 19: Measured depth errors and the theoretical upper bounds in our stereo vision system. Each measured depth error is smaller than the corresponding upper bound estimated by Equation (21).
Figure 20: Comparison of results in overcast and sunny weather for greenhouse strawberry plants: (a) the image captured when sunny; (b) the disparity image of (a) obtained via our ASW stereo matching algorithm with the AD-Census cost measure; (c) the top view of the point cloud generated with disparity refinement on image (b); (d) the side view of point cloud (c), in which the leaves are distributed on different height layers; (e) the image captured in overcast weather; (f) the disparity image of (e); (g) the top view of the point cloud generated with disparity refinement on image (f); (h) the side view of point cloud (g), whose structure is almost the same as (d).
Figure 21: Comparison of results in overcast and sunny weather for greenhouse turnip plants: (a) the image captured when overcast; (b) the disparity image of (a) obtained via our ASW stereo matching algorithm with the AD-Census cost measure; (c) the top view of the point cloud generated with disparity refinement on image (b); (d) the side view of point cloud (c); (e) the image captured when sunny; (f) the disparity image of (e); (g) the top view of the point cloud generated with disparity refinement on image (f); (h) the side view of point cloud (g), with a structure almost the same as (d).
Figure 22: Outdoor experiment on imaging a Monstera deliciosa sample plant: (a) one image of the captured image pair, where the highlighted regions are labeled by red ellipses; (b) the disparity image of (a) obtained via our ASW stereo matching algorithm with the AD-Census cost measure. The disparity image exhibits invariance against highlights because there is no abrupt intensity change inside the highlighted regions in (b). Two views of the point cloud are shown in (c,d). The regions without highlights are mostly rugged, coinciding with the fact that highlights only come from smooth and flat surfaces.
Figure 23: Automatic leaf segmentation for the point cloud of the Epipremnum aureum sample plant. The point cloud contains the canopy structure only and is obtained by applying a green color filter to the original point cloud (Figure 13) generated by our method. (a,b) The side view and top view of the plant, respectively; (c,d) the segmentation results corresponding to (a,b), respectively. Different leaves are painted with different colors, and points believed to belong to the same leaf are painted with the same color. The segmentation shows satisfactory performance, recognizing 35 of the 37 true leaves.
Review
Annual and Seasonal Glacier-Wide Surface Mass Balance Quantified from Changes in Glacier Surface State: A Review on Existing Methods Using Optical Satellite Imagery
by Antoine Rabatel, Pascal Sirguey, Vanessa Drolon, Philippe Maisongrande, Yves Arnaud, Etienne Berthier, Lucas Davaze, Jean-Pierre Dedieu and Marie Dumont
Remote Sens. 2017, 9(5), 507; https://doi.org/10.3390/rs9050507 - 20 May 2017
Cited by 29 | Viewed by 9983
Abstract
Glaciers are one of the terrestrial essential climate variables (ECVs), as they respond very sensitively to climate change. A key driver of their response is the glacier surface mass balance, which is typically derived from field measurements. It deserves to be quantified over long time scales to better understand the accumulation and ablation processes at the glacier surface and their relationships with inter-annual changes in meteorological conditions and long-term climate change. Glaciers with in situ monitoring of surface mass balance are scarce at the global scale, and satellite remote sensing provides a powerful tool to increase the number of monitored glaciers. In this study, we review three optical remote sensing methods developed to quantify seasonal and annual glacier surface mass balances. These methodologies rely on the multitemporal monitoring of the end-of-summer snow line for the equilibrium-line altitude (ELA) method, of the annual cycle of glacier surface albedo for the albedo method, and of the regional seasonal snow cover for the snow-map method. Each method is presented together with an illustrative application. The ELA method shows promising results for quantifying annual surface mass balance and for reconstructing multi-decadal time series. The other two methods currently require calibration against existing in situ data, although a generalization of these methods (without calibration) could be achieved. Both show satisfying results at the annual and seasonal scales, particularly for the summer surface mass balance in the case of the albedo method and for the winter surface mass balance in the case of the snow-map method. The limits of each method (e.g., cloud coverage, debris-covered glaciers, monsoon-regime and cold glaciers), their complementarities and the future challenges (e.g., automation of the satellite image processing, generalization of the methods needing calibration) are also discussed. Full article
(This article belongs to the Special Issue Remote Sensing of Glaciers)
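A minimal sketch of the calibration step behind the albedo method: annual surface mass balance is regressed linearly on the minimum glacier-wide albedo retrieved from satellite imagery, and the fitted relation then predicts SMB for years without field data. All numbers below are invented for illustration.

```python
import numpy as np

# Calibration of the albedo method (illustrative numbers): annual SMB
# (m w.e.) regressed on the minimum glacier-wide albedo of each year.
alpha_min = np.array([0.42, 0.35, 0.51, 0.30, 0.46, 0.38])
smb_obs   = np.array([-0.6, -1.4,  0.3, -1.9, -0.1, -1.0])

slope, intercept = np.polyfit(alpha_min, smb_obs, 1)

def smb_from_albedo(a_min):
    """Predict annual SMB from the satellite-retrieved minimum albedo."""
    return slope * a_min + intercept

print(smb_from_albedo(0.40))   # SMB estimate for a year without field data
```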
Show Figures
Figure 1: Examples of the snow line identification (in yellow) on the Glacier d’Argentière. The images were acquired on 18 August 2002 by Landsat-7 (left) and 9 September 2013 by Landsat-8 (right). These illustrations use a spectral band combination involving the bands #2 (0.52–0.60 μm), #4 (0.77–0.90 μm) and #5 (1.55–1.75 μm) for Landsat-7 and the bands #3 (0.53–0.59 μm), #5 (0.85–0.88 μm) and #6 (1.57–1.65 μm) for Landsat-8. Fifty-meter elevation contour lines are shown in grey.
Figure 2: Cumulative glacier-wide mass balance time series for 30 glaciers in the French Alps (1983–2014), quantified from the end-of-summer snow line. The black curve on the graph is the average of the 30 glaciers, and the grey area shows the ±1 standard deviation interval. Numbers in brackets after the glacier name refer to the map on the right. Adapted from Rabatel et al. [9].
Figure 3: Relationships between (A) annual and (B) summer surface mass balance (SMB) and minimum glacier-wide albedo calculated across 26 MODIS pixels on Brewster Glacier (from [42]).
Figure 4: (A) Park Pass Glacier with footprint of MODIS 250-m resolution pixels used to retrieve glacier surface albedo; (B) average seasonal albedo cycle of Park Pass Glacier for the 2000–2015 period. Retrieval of albedo is compromised during winter months as the glacier is almost fully in the shade at the time of MODIS/Terra acquisition.
Figure 5: (A) Evolution of $\bar{\alpha}^{\min}$ and $SLA_i$ from the snow line aerial survey program for Park Pass glacier over the 2000–2015 period; (B) comparative relationship between $\bar{\alpha}^{\min}$ and $SLA_i$ for Park Pass and Brewster glaciers.
Figure 6: (A) Hypsometric curve of Brewster Glacier; (B) comparison between the accumulation area ratio (AAR) estimated from a linear spectral mixture model of glacier-wide surface albedo and reference values from reanalysis of in situ glaciological observations; (C) comparison between mass balance obtained with the gradient-based albedo method and estimated from the re-analysed glaciological method (2005–2013, [38]) and the calibrated albedo method (2000–2004, [42]).
Figure 7: Temporal variations of winter mass balance and cumulative winter albedo for Brewster Glacier.
Figure 8: Example of regional seasonal snow maps over the European Alps for two contrasted hydrological years, produced by averaging all normalized difference snow index (NDSI) syntheses included between 1 October and 30 April for the winter season and between 1 May and 30 September for the summer season.
Figure 9: Example of altitudinal distribution of NDSI for each year since 1998. The NDSI was averaged in a square window centred on Griesgletscher, central Swiss Alps. The red horizontal line represents the NDSI value from which the mean regional snow altitude Z (represented by the red vertical line) is inferred for each year. (a) Winter NDSI over 1999–2014; (b) summer NDSI over 1998–2014. From Drolon et al. [51].
Figure 10: Observed (a) winter and (b) summer SMB of Griesgletscher, central Swiss Alps, as a function of the mean regional snow altitude Z for each year of the calibration period, represented by coloured dots. Dashed thin lines represent the 95% confidence intervals for the linear regression (solid line). From Drolon et al. [51].
Figure 11: Time series of observed SMB (red) and VGT SMB (blue) over the period of 1998–2008, averaged for the 55 glaciers. The pink curves represent the ±1 standard deviation for all glaciers. (a) Winter SMB; (b) summer SMB. From Drolon et al. [51].
Article
A Machine Learning Method for Co-Registration and Individual Tree Matching of Forest Inventory and Airborne Laser Scanning Data
by Sebastian Lamprecht, Andreas Hill, Johannes Stoffels and Thomas Udelhoven
Remote Sens. 2017, 9(5), 505; https://doi.org/10.3390/rs9050505 - 19 May 2017
Cited by 11 | Viewed by 8181
Abstract
Determining the exact position of a forest inventory plot (and hence the position of the sampled trees) is often hampered by poor Global Navigation Satellite System (GNSS) signal quality beneath the forest canopy. Inaccurate geo-references hamper the performance of models that aim to retrieve useful information from spatially high-resolution remote sensing data (e.g., species classification or timber volume estimation). This restriction is even more severe at the level of individual trees. The objective of this study was to develop a post-processing strategy to improve the positional accuracy of GNSS-measured sample-plot centers and a method to automatically match trees within a terrestrial sample plot to aerially detected trees. We propose a new method that uses a random forest classifier to estimate the matching probability of each terrestrial-reference and aerially detected tree pair, which provides the opportunity to assess the reliability of the results. We investigated 133 sample plots of the Third German National Forest Inventory (BWI, 2011–2012) within the German federal state of Rhineland-Palatinate. For training and objective validation, synthetic forest stands were modeled using the Waldplaner 2.0 software. Our method achieved an overall accuracy of 82.7% for co-registration and 89.1% for tree matching, and 60% of the investigated plots could be successfully relocated. The probabilities provided by the algorithm are an objective indicator of the reliability of a specific result, which could be incorporated into quantitative models to increase the performance of forest attribute estimations. Full article
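The core of the proposed matching step, estimating a matching probability for each terrestrial/aerial tree pair with a random forest, can be sketched as follows. The two-dimensional feature vector (horizontal distance, height difference) and the synthetic training data are illustrative stand-ins for the paper's richer feature set and its Waldplaner simulations.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pair_features(terrestrial, aerial):
    """Features of one candidate pair (each tree: x, y, height):
    horizontal distance and height difference. The paper uses a
    richer feature set; these two are stand-ins."""
    dist = np.hypot(*(terrestrial[:2] - aerial[:2]))
    return [dist, abs(terrestrial[2] - aerial[2])]

rng = np.random.default_rng(1)
# Synthetic training pairs: true matches are close and similar in height.
X = np.vstack([rng.normal([1.0, 0.5], 0.3, (200, 2)),    # label 1: matches
               rng.normal([6.0, 4.0], 2.0, (200, 2))])   # label 0: non-matches
y = np.r_[np.ones(200), np.zeros(200)]
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

candidate = pair_features(np.array([10.0, 5.0, 24.0]),
                          np.array([10.8, 5.3, 23.1]))
print(rf.predict_proba([candidate])[0, 1])   # estimated matching probability
```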
Show Figures

Graphical abstract
Figure 1">
Figure 1: Study area and German National Forest Inventory (Bundeswaldinventur, BWI) sampling design. Background: Web Map Service (WMS) of forest types provided by Copernicus [35].
Figure 2: Study design. For each inventory plot, 100 simulations are generated, which serve for algorithm training and validation. Finally, the algorithm is applied to the original inventory plots. ALS: airborne laser scanning.
Figure 3: (a) Height grid of the original airborne laser scanning (ALS) point cloud and (b) of a corresponding synthetic point cloud, with 1 m resolution. The given stand is characterized by predominant oaks, some tall spruces and young beeches. The median height of the detected trees is about 24 m.
Figure 4: (a) Flowchart of the matching probability estimation for a given potential tree pair (s, d) and (b) corresponding schematic illustration of the feature vector calculation. RF: random forest classifier.
Figure 5: Correlation between predicted probability and observed probability for (a) tree matching and (b) co-registration.
Figure 6: Effect of (a) the vertical displacement of the plot center; (b) the distance between the first point pair; (c) the correlation coefficient for tree height; and (d) the NND for surveyed trees on the matching probability (using just one simulation for each plot). The numbers in brackets correspond to the number of tree pairs. The whiskers extend to ten times the interquartile range. Outliers are marked by circles.
Figure 7: Feature importance derived from the random forest classifier.
Figure 8: Effect of (a) the number of linked trees and (b) the horizontal displacement of the plot center on the co-registration probability. The numbers in brackets correspond to the number of plots. The whiskers extend to 1.5 times the interquartile range. Outliers are marked by circles.
Figure 9: Effect of (a) the dominant tree species; (b) the number of tree species; (c) the height of the tallest tree; and (d) the variability of the tree heights on the probability of a correct co-registration. The numbers in brackets correspond to the number of plots. The whiskers extend to 1.5 times the interquartile range. Outliers are marked by circles.
Figure 10: (a) Correlation between the applied probability threshold for a correct co-registration and the resulting number of plots classified as correctly co-registered. (b) Effect of the year of survey on the co-registration probability. Both figures are based on the original BWI plots. The numbers in brackets correspond to the number of plots. GNSS: Global Navigation Satellite System.
Figure 11: Effect of the number of detections per hectare on the matching probability, using the simulated datasets. The numbers in brackets correspond to the number of plots. The whiskers extend to ten times the interquartile range. Outliers are marked by circles.
Article
Assessing Re-Composition of Xing’an Larch in Boreal Forests after the 1987 Fire, Northeast China
by Junjie Wang, Cuizhen Wang and Shuying Zang
Remote Sens. 2017, 9(5), 504; https://doi.org/10.3390/rs9050504 - 19 May 2017
Cited by 17 | Viewed by 5101
Abstract
Xing’an larch, a deciduous coniferous species, is the zonal tree of the Greater Xing’an Mountains in Northeast China. In May 1987, a catastrophic fire broke out in the mountains and burned 1.3 million hectares of forest in 26 days. While studies have shown that forest greenness came back to normal within a certain number of years, the re-composition of this zonal species after the 1987 fire has not been studied. With a series of Landsat 8 OLI images acquired in 2013–2015, this study builds Normalized Difference Vegetation Index (NDVI) and Green Vegetation Index (GVI) time series covering a complete growing cycle. A decision tree is developed to classify tree species with an overall accuracy of 86.16% and a Kappa coefficient of 0.80. The re-composition of Xing’an larch after the 1987 fire is extracted, and its variation across areas under different fire intensities is statistically analyzed. Results show that Xing’an larch comprises 17.52%, 26.20% and 33.19% of forests in burned areas with high, medium and low fire intensities, respectively. Even around 30 years after the 1987 fire, the composition of this zonal species in the boreal forest has not fully recovered in the Greater Xing’an Mountains. The Xing’an larch map extracted in this study could serve as base information for ecological and environmental studies at this southern end of boreal Eurasia. Full article
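A toy version of the classification idea: compute a seasonal NDVI series and separate the deciduous conifer (larch) from evergreens by its late green-up and early senescence. The sketch below is hypothetical; the thresholds and the three-date series are invented and do not correspond to the paper's CART-derived values.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red + 1e-9)

def classify_pixel(ndvi_series):
    """Toy phenology-based threshold tree (thresholds invented):
    a deciduous conifer greens up late and senesces early, so its
    spring and autumn NDVI stay low while summer NDVI is high."""
    spring, summer, autumn = ndvi_series
    if summer < 0.3:
        return "non-forest"
    if spring < 0.35 and autumn < 0.35:
        return "larch (deciduous conifer)"
    if spring > 0.5:
        return "evergreen conifer"
    return "broadleaf"

# NDVI from illustrative NIR/red reflectances at three dates.
print(classify_pixel([ndvi(0.45, 0.24), ndvi(0.60, 0.08), ndvi(0.42, 0.25)]))
```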
Show Figures

Graphical abstract
Figure 1">
Figure 1: The study area and two example images in standard false color: Landsat 5 TM image in 1987 (a) and Landsat 8 OLI image in 2015 (b).
Figure 2: The NFRI unit polygons with pure stands and field survey points for the three tree species in the study area. Boundaries of unit polygons are also displayed.
Figure 3: Trajectories of tree species and non-forest covers for NDVI (a) and GVI (b). Error bars are marked as ± standard error at each point.
Figure 4: A 5-node CART decision tree for the three tree species. The thresholds are summarized from the CART outputs of the training samples.
Figure 5: Distributions of forest/non-forest (a) and tree species (b) in the study area.
Figure 6: The NBR-extracted fire intensity map. A cutline is manually drawn to generally separate the burned areas in the north and unburned areas in the south of the study area.
Figure 7: Tree species composition in forests under different fire intensities.
Article
Evaluation of Error in IMERG Precipitation Estimates under Different Topographic Conditions and Temporal Scales over Mexico
by Yandy G. Mayor, Iryna Tereshchenko, Mariam Fonseca-Hernández, Diego A. Pantoja and Jorge M. Montes
Remote Sens. 2017, 9(5), 503; https://doi.org/10.3390/rs9050503 - 19 May 2017
Cited by 60 | Viewed by 6606
Abstract
This study evaluates the precipitation product of the Integrated Multi-satellitE Retrievals for Global Precipitation Measurement (IMERG) over the Mexican region during the period between April 2014 and October 2015, using three different time scales for cumulative precipitation (hourly, daily and seasonal). The IMERG data are also analyzed as a function of elevation, with rain gauges from the automatic meteorological station network located within the study area used as a reference. Continuous and categorical statistics are used to evaluate IMERG. It was found that IMERG performed better at the daily and seasonal time scales: while hourly precipitation estimates reached a mean correlation coefficient of 0.35, the daily and seasonal estimates achieved correlations over 0.51. In addition, the IMERG precipitation product was able to reproduce the diurnal and daily cycles of average precipitation, with a tendency to overestimate the rain gauges. However, extreme precipitation events were strongly underestimated, as shown by relative biases of −61% and −46% for the hourly and daily precipitation analyses, respectively. It was also found that IMERG tends to improve precipitation detection and to decrease magnitude errors over the higher terrain elevations of Mexico. Full article
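The categorical statistics used in this kind of evaluation are computed from a simple rain/no-rain contingency table. The sketch below derives POD, FAR, CSI and ACC from synthetic satellite and gauge series; the 0.1 mm event threshold is an assumption for illustration.

```python
import numpy as np

def categorical_scores(sat, gauge, threshold=0.1):
    """POD, FAR, CSI and ACC from a rain/no-rain contingency table.
    threshold (mm) defines an event; 0.1 is an illustrative choice."""
    s, g = sat >= threshold, gauge >= threshold
    hits = np.sum(s & g)
    false_alarms = np.sum(s & ~g)
    misses = np.sum(~s & g)
    correct_neg = np.sum(~s & ~g)
    pod = hits / (hits + misses)                    # probability of detection
    far = false_alarms / (hits + false_alarms)      # false alarm ratio
    csi = hits / (hits + misses + false_alarms)     # critical success index
    acc = (hits + correct_neg) / s.size             # accuracy
    return pod, far, csi, acc

rng = np.random.default_rng(2)
gauge = rng.gamma(0.3, 2.0, 1000)                   # synthetic gauge record
sat = gauge * rng.lognormal(0.0, 0.5, 1000)         # noisy satellite estimate
print(categorical_scores(sat, gauge))
```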
Show Figures

Graphical abstract
Figure 1">
Figure 1: The area of study is enclosed in the red polygon (a); automatic meteorological stations are shown as black dots (b). Note the irregularity of the topography (color scale in m). In yellow, SMW shows the location of the Sierra Madre Occidental, SME stands for the Sierra Madre Oriental and Alt is the Mexican Altiplano.
Figure 2: Average hourly rainfall (mm/h) for Integrated Multi-satellitE Retrievals for Global Precipitation Measurement (IMERG) (red) and gauge (black) data for the period 1 April 2014 to 31 October 2015. Hourly average statistics are also shown: Pearson correlation coefficient (COR), blue; bias (BIAS) (mm/h), yellow; and root-mean-square error (RMSE) (mm/h), green, for the same period.
Figure 3: (a) Correlation (COR) map; (b) scatter plot for hourly precipitation events; and (c) correlation as a function of meteorological station elevation (red line); the black line shows the trend, and the elevation of the meteorological stations is represented by the gray shaded area. IMERG estimates and gauge hourly data from 1 April 2014 to 31 October 2015 have been used for these calculations.
Figure 4: (a) Relative bias (RBIAS) map; (b) root-mean-square error (RMSE) map; (c,d) RBIAS and RMSE, respectively, as a function of meteorological station elevation (red lines); the black lines show the trends, and the elevation of the meteorological stations is represented by the gray shaded area. IMERG estimates and gauge hourly data from 1 April 2014 to 31 October 2015 have been used for these calculations.
Figure 5: (a) False alarm ratio (FAR) map; (b) probability of detection (POD) map; (e) critical success index (CSI) map; (f) accuracy (ACC) map; (c,d,g,h) FAR, POD, CSI and ACC, respectively, as a function of meteorological station elevation (red lines); the black lines show the trends, and the elevation of the meteorological stations is represented by the gray shaded area. IMERG estimates and gauge hourly data from 1 April 2014 to 31 October 2015 have been used for these calculations.
Figure 6: Average daily rainfall (mm/day) for IMERG (thin pink) and gauge (thin gray) data from 1 April 2014 to 31 October 2015. The thick red and black lines result from smoothing the raw IMERG and gauge data, respectively. The COR, BIAS and RMSE values shown in the figure were calculated for these data.
Figure 7: Absolute errors as a function of meteorological station elevation. Absolute errors were calculated using the IMERG and gauge daily data from 1 April 2014 to 31 October 2015. Lines show the absolute errors for precipitation intensities ranging between 0 and 10 mm/day (cyan), 10 and 25 mm/day (blue) and over 25 mm/day (green).
Figure 8: (a) Correlation (COR) map; (b) scatter plot for daily precipitation events; and (c) correlation as a function of meteorological station elevation (red line); the black line shows the trend, and the elevation of the meteorological stations is represented by the gray shaded area. IMERG estimates and gauge daily data from 1 April 2014 to 31 October 2015 have been used for these calculations.
Figure 9: (a) Relative bias (RBIAS) map; (b) root-mean-square error (RMSE) map; (c,d) RBIAS and RMSE, respectively, as a function of meteorological station elevation (red lines); the black lines show the trends, and the elevation of the meteorological stations is represented by the gray shaded area. IMERG estimates and gauge daily data from 1 April 2014 to 31 October 2015 have been used for these calculations.
Figure 10: (a) False alarm ratio (FAR) map; (b) probability of detection (POD) map; (e) critical success index (CSI) map; (f) accuracy (ACC) map; (c,d,g,h) FAR, POD, CSI and ACC, respectively, as a function of meteorological station elevation (red lines); the black lines show the trends, and the elevation of the meteorological stations is represented by the gray shaded area. IMERG estimates and gauge daily data from 1 April 2014 to 31 October 2015 have been used for these calculations.
Figure 11: Average seasonal precipitation (mm) for the dry and wet seasons and for IMERG (red bars) and gauge (black bars) data for the period from 1 April 2014 to 31 October 2015. The average is calculated over all meteorological stations, and the dry season is considered to run from November to April.
Article
Characteristics of Evapotranspiration of Urban Lawns in a Sub-Tropical Megacity and Its Measurement by the ‘Three Temperature Model + Infrared Remote Sensing’ Method
by Guoyu Qiu, Shenglin Tan, Yue Wang, Xiaohui Yu and Chunhua Yan
Remote Sens. 2017, 9(5), 502; https://doi.org/10.3390/rs9050502 - 19 May 2017
Cited by 42 | Viewed by 5585
Abstract
Evapotranspiration (ET) is one of the most important factors in urban water and energy regimes. Because of the extremely high spatial heterogeneity of urban areas, accurately measuring ET with conventional methods remains a challenge due to their fetch requirements and low spatial resolution. The goals of this study were to investigate the characteristics of urban ET and its main influencing factors, and subsequently to improve a fetch-free, high spatial resolution method for urban ET estimation. The Bowen ratio and the 'three-temperature model (3T model) + infrared remote sensing (RS)' methods were used for these purposes. The results are as follows. (1) Urban ET is mainly affected by solar radiation; the effects of air humidity, wind velocity, and air temperature are very weak. (2) The average daily, monthly, and annual ET of the urban lawn is 2.70 mm, 60–100 mm, and 990 mm, respectively, values that are considerable compared with other landscapes. (3) The ratio of ET to precipitation is 0.65 in the wet season and 2.6 in the dry season, indicating that most of the precipitation is evaporated. (4) The fetch-free '3T model + infrared RS' approach is verified to be an accurate method for measuring urban ET; it agrees well with the Bowen ratio method (R² over 0.93 and root-mean-square error less than 0.04 mm h⁻¹). (5) The spatial heterogeneity of urban ET can also be accurately estimated by the proposed approach. These results are helpful for improving the accuracy of ET estimation in urban areas and are useful for urban water and environmental planning and management. Full article
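A minimal sketch of the Bowen ratio energy balance method used as the reference here: the available energy Rn − G is partitioned into sensible and latent heat via the Bowen ratio β = γΔT/Δe, and the latent heat flux is converted to an ET rate. The input numbers and the psychrometric constant are illustrative assumptions.

```python
def bowen_ratio_et(rn, g, dT, de, gamma=0.066):
    """Latent heat flux and ET rate from the Bowen ratio energy balance.

    rn, g  : net radiation and soil heat flux (W m-2)
    dT, de : air temperature (K) and vapour pressure (kPa) differences
             between the two measurement heights
    gamma  : psychrometric constant (kPa K-1), ~0.066 near sea level
    """
    beta = gamma * dT / de            # Bowen ratio, H / LE
    le = (rn - g) / (1.0 + beta)      # latent heat flux (W m-2)
    et = le * 3600.0 / 2.45e6         # mm h-1, with lambda ~ 2.45 MJ kg-1
    return le, et

print(bowen_ratio_et(rn=500.0, g=50.0, dT=0.8, de=0.12))   # midday lawn example
```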
Show Figures

Graphical abstract
Figure 1">
Figure 1: Location of the study area. The upper left panel shows the location of Shenzhen city in China. The upper right panel shows the location of the experimental field in Shenzhen. The bottom left panel shows the experimental field and the location of the Bowen ratio system (scale = 1:12,500). The photo at bottom right shows the Bowen ratio system tower and the lawn field.
Figure 2: (a–f) Daily mean values of solar radiation (Rs), net radiation (Rn), photosynthetically active radiation (PAR), air temperature, relative humidity, precipitation, and wind velocity over the period from 11 July 2014 to 30 September 2015 at the experimental site.
Figure 3: Daily (a) and monthly (b) average ET of the urban lawn from 11 July 2014 to 30 September 2015. Data were measured using the Bowen ratio tower.
Figure 4: Diurnal variations of lawn ET on typical sunny days in the four seasons in Shenzhen: 15 August 2014, 13 October 2014, 16 January 2015, and 16 April 2015 represent summer, fall, winter, and spring, respectively.
Figure 5: Diurnal variations of the seasonal average ET of the lawn in the four seasons.
Figure 6: Visible light image (a), temperature image (b), and ET of a lawn (c) at 12:00 on 15 July 2014. ET was estimated by the 'infrared RS + 3T model' method. Images were taken almost vertically from a tower approximately 50 m high.
Figure 7: (a–c) Comparison of the lawn ET estimated using the 'infrared RS + 3T model' and the Bowen ratio method.
Figure 8: Ratio of ET to precipitation in different months over a one-year period.
Article
Modulation Model of High Frequency Band Radar Backscatter by the Internal Wave Based on the Third-Order Statistics
by Pengzhen Chen, Lei Liu, Xiaoqing Wang, Jinsong Chong, Xin Zhang and Xiangzhen Yu
Remote Sens. 2017, 9(5), 501; https://doi.org/10.3390/rs9050501 - 19 May 2017
Cited by 6 | Viewed by 5804
Abstract
The modulation model of radar backscatter is an important topic in the remote sensing of oceanic internal waves by synthetic aperture radar (SAR). Previous modulation models were derived mainly under the hypothesis that ocean surface waves are Gaussian distributed. However, this is not always true in the complicated ocean environment: research has shown that measurements are usually larger than the values predicted by such models for high frequency radars (X-band and above). In this paper, a new modulation model is proposed that takes the third-order statistics of the ocean surface into account, covering situations in which the surface waves are non-Gaussian distributed. The model explains, in theory, the discrepancy between the measurements and the values calculated by the traditional models, and it can accurately predict the modulation for the higher frequency bands. The model was verified against experimental measurements recorded in a wind-wave tank. A further discussion of its applicability shows that it outperforms the traditional modulation model in predicting radar backscatter modulation for high frequency band radars or at larger wind speeds. Full article
(This article belongs to the Special Issue Ocean Remote Sensing with Synthetic Aperture Radar)
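The role of the third-order statistics can be illustrated with a synthetic wave record: a weakly nonlinear (Stokes-like) wave has sharpened crests and therefore nonzero skewness, the lowest-order non-Gaussian signature that second-order (Gaussian-surface) models discard. The sketch below only computes these moments; it does not reproduce the paper's bispectrum-based modulation model.

```python
import numpy as np

def surface_moments(eta):
    """Variance and skewness of a surface-elevation record eta(t).
    Gaussian-surface models keep only the variance; a nonzero
    skewness is the lowest-order non-Gaussian (third-order) term."""
    eta = eta - eta.mean()
    var = np.mean(eta**2)
    return var, np.mean(eta**3) / var**1.5

t = np.linspace(0.0, 60.0, 6000)
# Stokes-like wave: a phase-locked second harmonic sharpens the crests.
eta = np.cos(2 * np.pi * 0.5 * t) + 0.15 * np.cos(2 * np.pi * 1.0 * t)
print(surface_moments(eta))    # positive skewness from the sharpened crests
```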
Show Figures

Graphical abstract
Figure 1">
Figure 1: Schematic side view of the experimental wind-wave tank.
Figure 2: Experimental devices: (a) radar system; (b) CCD array.
Figure 3: Doppler spectrum at a wind speed of 4 m/s: (a) X band; (b) Ka band.
Figure 4: Surface wave recorded by the CCD: (a) wave height; (b) wave spectra at different wind speeds.
Figure 5: Comparison between radar measurements and values calculated by the models in theory, 10 m wind speed 5.2 m/s: (a) IEM2 model vs. radar, X-band; (b) IEM2 model vs. radar, Ka-band; (c) bispectrum vs. radar, X-band; (d) bispectrum vs. radar, Ka-band; (e) IEM3 model vs. radar, X-band; (f) IEM3 model vs. radar, Ka-band.
Figure 6: Comparison between radar measurements and values calculated by the models in theory, 10 m wind speed 6.9 m/s: (a) IEM2 model vs. radar, X-band; (b) IEM2 model vs. radar, Ka-band; (c) bispectrum vs. radar, X-band; (d) bispectrum vs. radar, Ka-band; (e) IEM3 model vs. radar, X-band; (f) IEM3 model vs. radar, Ka-band.
Figure 7: Modulation depth of radar backscatter as a function of wind speed: (a) X-band; (b) Ka-band.
Article
Learning Dual Multi-Scale Manifold Ranking for Semantic Segmentation of High-Resolution Images
by Mi Zhang, Xiangyun Hu, Like Zhao, Ye Lv, Min Luo and Shiyan Pang
Remote Sens. 2017, 9(5), 500; https://doi.org/10.3390/rs9050500 - 19 May 2017
Cited by 50 | Viewed by 11102
Abstract
Semantic image segmentation has recently witnessed considerable progress through the training of deep convolutional neural networks (CNNs). The core issue of this technique is the limited capacity of CNNs to depict visual objects. Existing approaches tend to utilize approximate inference in a discrete domain or additional aids, and do not have a guarantee of a global optimum. We propose the use of the multi-label manifold ranking (MR) method to solve the linear objective energy function in a continuous domain to delineate visual objects and address these problems. We present a novel embedded single stream optimization method based on the MR model that avoids approximations without sacrificing expressive power. In addition, we propose a novel network, which we refer to as the dual multi-scale manifold ranking (DMSMR) network, that combines dilated, multi-scale strategies with the single stream MR optimization method in a deep learning architecture to further improve performance. Experiments on high resolution images, including close-range and remote sensing datasets, demonstrate that the proposed approach achieves competitive accuracy without additional aids in an end-to-end manner. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)
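For readers unfamiliar with manifold ranking, the classical MR model (in the Zhou et al. style) has the closed-form optimum F* = (I − αS)⁻¹Y, where S is the symmetrically normalized affinity matrix of the graph and Y holds the initial per-class scores. The sketch below shows that generic formulation on a toy graph; it is a simplified stand-in, not the DMSMR layer itself:

```python
import numpy as np

def manifold_ranking(W, Y, alpha=0.99):
    """Closed-form manifold ranking (classical Zhou et al. style).

    W : (n, n) symmetric affinity matrix between graph nodes
    Y : (n, c) initial per-class scores (e.g., upsampled CNN outputs)
    Returns the optimal ranking scores F* = (I - alpha * S)^(-1) Y,
    where S = D^(-1/2) W D^(-1/2) is the normalized affinity.
    """
    d = W.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    S = W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    n = W.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * S, Y)

# Toy chain graph, two classes; node 0 seeded class 0, node 3 class 1
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Y = np.array([[1, 0], [0, 0], [0, 0], [0, 1]], dtype=float)
F = manifold_ranking(W, Y)
print(F.argmax(axis=1))   # labels propagate along the graph: [0 0 1 1]
```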
Show Figures

Figure 1. Dual multi-scale manifold ranking (DMSMR) network overview. For each dilated convolutional layer, a non-dilated convolution layer is applied following the pooling layer in each scale. The dilated and non-dilated convolution layers form a dual layer, in which the corresponding layers are optimized with the embedded feedforward single-stream manifold ranking network. The scale factor is implicitly represented by the pooling layer in each block. Figure 2 illustrates how the manifold ranking optimization method is embedded into the single-stream network. The optimized outputs of each scale, that is, the F̂_l generated at each scale, are combined by Equation (17).
Figure 2. The embedded feedforward single-stream manifold ranking optimization network. The convolutional features, upsampled to full image resolution for each class (such as road, sky and building in the CamVid dataset [68,69] depicted in the figure), serve as the initial manifold ranking score F̃* to be optimized. By applying the feedforward MR inference with the contextual information extracted from the input image, the optimal MR score F̂ of each class is obtained by Equation (10). The only requirement of the proposed network is the multi-label neighborhood relationship, which is used to construct the Laplacian matrix L̃ in a single stream rather than the unary and pairwise streams presented in [26,29].
Figure 3. Semantic segmentation results on PASCAL VOC 2012 validation images. DMSMR: result predicted by the dual multi-scale manifold ranking network. GT: ground truth.
Figure 4. Semantic segmentation results on CamVid images. DMSMR: result predicted by the dual multi-scale manifold ranking network. GT: ground truth.
Figure 5. Boundary accuracy analysis on the CamVid dataset. (a) Trimap visualization: top-left, source image; top-right, ground truth; bottom-left, trimap with a one-pixel band width; bottom-right, trimap with a three-pixel band width. (b) Pixel mIoU as a function of band width around object boundaries, for the model before and after employing the multi-scale (MS), dilated convolution (Dilated), single-stream manifold ranking (MR-Opti) and joint (DMSMR) strategies.
Figure 6. Comparative results on Vaihingen test imagery (tile numbers 2, 4, 6 and 8). For each image, dense prediction results and the corresponding error maps (red/green) are generated with the different approaches.
Figure 7. Semantic segmentation results with different strategies on the EvLab-SS validation patches, for four kinds of image patches with different spatial resolutions and illuminations. The first and second rows are GeoEye and WorldView-2 satellite images resampled to 0.5 m and 0.2 m GSD; the third and fourth rows are aerial images resampled to 0.25 m and 0.1 m GSD, respectively. MS: predictions with the multi-scale approach. MR-Opti: results using the manifold ranking optimization method. DMSMR: results predicted by the dual multi-scale manifold ranking network. GT: ground truth.
Figure 8. Boundary accuracy analysis on the EvLab-SS dataset. (a) Trimap visualization: top-left, source patch; top-right, ground truth; bottom-left, trimap with a one-pixel band width; bottom-right, trimap with a three-pixel band width. (b) Pixel mIoU as a function of band width around object boundaries, for the MS, Dilated, MR-Opti and joint DMSMR strategies.
Figure 9. Architectures of the networks with different strategies: (a) convolutional networks before employing the strategies (Before); (b) networks using the multi-scale strategy (MS); (c) networks using the dilated method (Dilated); (d) networks using manifold ranking optimization (MR-Opti).
35673 KiB  
Article
Impacts of Thermal Time on Land Surface Phenology in Urban Areas
by Cole Krehbiel, Xiaoyang Zhang and Geoffrey M. Henebry
Remote Sens. 2017, 9(5), 499; https://doi.org/10.3390/rs9050499 - 18 May 2017
Cited by 23 | Viewed by 8230
Abstract
Urban areas alter local atmospheric conditions by modifying surface albedo and consequently the surface radiation and energy balances, releasing waste heat from anthropogenic uses, and increasing atmospheric aerosols, all of which combine to increase temperatures in cities, especially overnight, compared with surrounding rural [...] Read more.
Urban areas alter local atmospheric conditions by modifying surface albedo, and consequently the surface radiation and energy balances, releasing waste heat from anthropogenic uses, and increasing atmospheric aerosols, all of which combine to increase temperatures in cities, especially overnight, compared with surrounding rural areas, a phenomenon called the “urban heat island” effect. Recent rapid urbanization of the planet has generated calls for remote sensing research on the impacts of urban areas and urbanization on the natural environment. Spatially extensive, high spatial resolution data products are needed to capture phenological patterns in regions with heterogeneous land cover and external drivers, such as cities, which comprise a mixture of land covers/land uses and experience microclimatic influences. Here we use the 30 m normalized difference vegetation index (NDVI) product from the Web-Enabled Landsat Data (WELD) project to analyze the impacts of urban areas and their surface heat islands on the seasonal development of the vegetated land surface along an urban–rural gradient for 19 cities located in the Upper Midwest of the United States. We fit NDVI observations from 2003–2012 as a quadratic function of thermal time, expressed as accumulated growing degree-days (AGDD) calculated from the Moderate-resolution Imaging Spectroradiometer (MODIS) 1 km land surface temperature product, to model decadal land surface phenology metrics at 30 m spatial resolution. In general, the duration of growing season (measured in AGDD) in green core areas is equivalent to that in urban extent areas, but significantly longer than that in areas outside of the urban extent. We found an exponential relationship in the difference in duration of growing season between urban and surrounding rural areas as a function of distance from urban core areas for perennial vegetation, with an average magnitude of 669 AGDD (base 0 °C) and the influence of urban areas extending more than 11 km from urban core areas. At the regional scale, the relative change in duration of growing season does not appear to be significantly related to the total area of urban extent, population, or latitude; the distance and magnitude of the influence urban areas exert on vegetation in and near cities is relatively uniform. Full article
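The quadratic land surface phenology model described in the abstract can be prototyped in a few lines: fit NDVI = a + b·AGDD + c·AGDD² per pixel, then read phenometrics off the fitted coefficients. The sketch below follows one common convention in this line of work (peak at −b/2c, duration of growing season −b/c, i.e., the AGDD span over which the curve exceeds the background level a); the paper's exact metric definitions may differ, and the data are synthetic:

```python
import numpy as np

def lsp_quadratic_fit(agdd, ndvi):
    """Fit NDVI = a + b*AGDD + c*AGDD^2 and derive phenometrics.

    Assumes a downward-opening parabola (c < 0), the usual shape of
    the quadratic land surface phenology model.  Returned metrics:
      ttp : thermal time to peak, -b / (2c)      [AGDD]
      ph  : peak height, a - b**2 / (4c)         [NDVI]
      dgs : duration of growing season, -b / c   [AGDD]
            (distance between the two AGDD values where the fitted
             curve returns to the background level a)
    """
    c, b, a = np.polyfit(agdd, ndvi, deg=2)   # highest power first
    if c >= 0:
        raise ValueError("no downward-opening parabola; fit rejected")
    return {"ttp": -b / (2 * c), "ph": a - b**2 / (4 * c), "dgs": -b / c}

# Illustrative pixel: NDVI peaking near 1800 AGDD
rng = np.random.default_rng(0)
agdd = np.linspace(0, 3600, 40)
ndvi = 0.15 + 6e-4 * agdd - 1.7e-7 * agdd**2 + rng.normal(0, 0.02, 40)
print(lsp_quadratic_fit(agdd, ndvi))
```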
Show Figures

Figure 1. (a) 2011 National Land Cover Database land cover type over the Upper Midwest region of the United States, showing the 19 selected study cities in purple and the corresponding region of interest in cyan. (b) MODIS land surface temperature-derived decadal (2003–2012) mean annual accumulated growing degree-days (AGDD) over the Upper Midwest, showing the southwest (shades of red; higher AGDD) to northeast (shades of blue; lower AGDD) gradient of thermal time in the region. Additional information on the urban areas can be found in Table 1.
Figure 2. (a) Example of land cover type (LCT) classification (derived from the 2011 National Land Cover Database LCT product) over Sioux Falls, SD. "Water", "Barren", and "Change" pixels (white) were excluded from the analyses. (b) Example of the four urban spatial subregions used in the analysis: urban extent (UE), urban core areas (UCAs), green core areas (GCAs), and areas outside of the UE, over Sioux Falls, SD.
Figure 3. Processing outline for the MODIS land surface temperature (LST) to accumulated growing degree-days (AGDD) algorithm, which converts MODIS LST 8-day composites into annual time series of AGDD (adapted from [40]).
Figure 4. Quadratic land surface phenology model fit to the 2003–2012 time series of Web-Enabled Landsat Data normalized difference vegetation index (NDVI) vs. MODIS land surface temperature-derived accumulated growing degree-days, for example perennial forest (green) and annual cropland (orange) pixels selected from Omaha, NE. The phenometrics derived from the model are shown in grey.
Figure 5. Example of the exponential trend model fit to the change in duration of growing season (ΔDGS_AGDD) as a function of distance from the nearest urban core area for Omaha-Council Bluffs, NE-IA. The grey diamonds show where the exponential model reaches 95% of its asymptotic value, used to calculate the magnitude of ΔDGS_AGDD and the distance at which urban effects become insignificant. Blue: model fit to strictly perennial vegetation land cover types; green: annual croplands included.
Figure 6. Results of equivalence tests between group means of duration of growing season (DGS_AGDD). DGS_AGDD is equivalent between green core areas (green) and urban extent (UE) areas (tan), but significantly lower in areas outside of the UE (brown) for 17 of 19 cities.
Figure 7. Duration of growing season (DGS_AGDD) for nine study cities within the greater Minneapolis-St. Paul, MN-WI region. Water is masked (blue), and pixels with a quadratic land surface phenology model fit <0.5 are in black.
Figure 8. Land cover type (LCT) classification scheme for nine study cities within the greater Minneapolis-St. Paul, MN-WI region, demonstrating regional differences in dominant LCT between the intensely cultivated regions in the southwest (brown) and the increasingly forest/herbaceous LCTs (green/yellow) to the north and east, with the large metropolitan area of Minneapolis-St. Paul (grey) lying between the two regions.
Figure 9. Exponential trend model fit to the difference in duration of growing season (ΔDGS_AGDD) as a function of distance from the nearest UCA for four selected cities. Differences in ΔDGS_AGDD calculated with croplands (green) and without croplands (blue) are evident, particularly in the predominantly agricultural areas surrounding Omaha-Council Bluffs, NE-IA, and Des Moines, IA, compared to rural Rochester, MN, and Minneapolis-St. Paul, MN-WI, where forests and herbaceous land covers are more widely distributed. The grey diamonds show where the exponential model reaches 95% of its asymptotic value, used to calculate the magnitude of ΔDGS_AGDD and the distance at which urban effects become insignificant.
Figure 10. Difference in duration of growing season (ΔDGS_AGDD) in terms of: (a) accumulated growing degree-days (AGDD); (b) calendar days; and (c) percentage of mean DGS_AGDD, for the model fit with (orange) and without (blue) croplands. Note that ΔDGS is significantly related to latitude in terms of total AGDD (a), but not in terms of relative (%) ΔDGS (c).
Figure 11. Examples of the linear regression model fit to PH_NDVI vs. half-TTP_NDVI for four selected cities. Note the large variation in half-TTP_NDVI for croplands (yellow) and the positive linear relationships seen in the three perennial vegetation land cover types.
10169 KiB  
Article
Classification for High Resolution Remote Sensing Imagery Using a Fully Convolutional Network
by Gang Fu, Changjun Liu, Rong Zhou, Tao Sun and Qijian Zhang
Remote Sens. 2017, 9(5), 498; https://doi.org/10.3390/rs9050498 - 18 May 2017
Cited by 320 | Viewed by 21238
Abstract
As a variant of Convolutional Neural Networks (CNNs) in Deep Learning, the Fully Convolutional Network (FCN) model achieved state-of-the-art performance for natural image semantic segmentation. In this paper, an accurate classification approach for high resolution remote sensing imagery based on the improved FCN [...] Read more.
As a variant of Convolutional Neural Networks (CNNs) in deep learning, the Fully Convolutional Network (FCN) model achieved state-of-the-art performance for natural image semantic segmentation. In this paper, an accurate classification approach for high resolution remote sensing imagery, based on an improved FCN model, is proposed. Firstly, we improve the density of the output class maps by introducing atrous convolution; secondly, we design a multi-scale network architecture by adding a skip-layer structure, making it capable of multi-resolution image classification. Finally, we further refine the output class map using conditional random fields (CRFs) post-processing. Our classification model is trained on 70 GF-2 true color images and tested on four further GF-2 images and three IKONOS true color images. We also apply object-oriented classification, patch-based CNN classification, and the FCN-8s approach to the same images for comparison. The experiments show that, compared with the existing approaches, our approach yields a clear improvement in accuracy: its average precision, recall, and Kappa coefficient are 0.81, 0.78, and 0.83, respectively. The experiments also demonstrate that our approach is well suited to multi-resolution image classification. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)
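Atrous (dilated) convolution, the first ingredient of the approach, enlarges the receptive field without downsampling or extra parameters by inserting r − 1 zeros between kernel taps; with padding equal to the dilation rate, a 3 × 3 kernel preserves the spatial size while covering a (2r + 1) × (2r + 1) window. A minimal PyTorch sketch with illustrative shapes, not the paper's network:

```python
import torch
import torch.nn as nn

# A 3x3 atrous (dilated) convolution keeps the feature map dense:
# with padding = dilation, spatial size is preserved while the
# receptive field grows as if the kernel were (2r+1) x (2r+1).
x = torch.randn(1, 64, 128, 128)          # N, C, H, W feature map

ordinary = nn.Conv2d(64, 64, kernel_size=3, padding=1, dilation=1)
atrous_2 = nn.Conv2d(64, 64, kernel_size=3, padding=2, dilation=2)
atrous_3 = nn.Conv2d(64, 64, kernel_size=3, padding=3, dilation=3)

for conv in (ordinary, atrous_2, atrous_3):
    y = conv(x)
    print(y.shape)    # torch.Size([1, 64, 128, 128]) in every case
```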
Show Figures

Graphical abstract

Figure 1. The general pipeline of our approach: the training stage and the classification stage are illustrated in the upper and lower parts, respectively.
Figure 2. Network architectures of the standard convolutional neural network (CNN) and the fully convolutional network (FCN). (a) Standard CNN: stacks of convolutional-pooling layers and fully connected (FC) layers. Given an image, the distribution over classes is predicted, and the class with the largest distribution value is taken as the class of the image. (b) FCN: FC layers are replaced by convolutional layers, so the network maintains the 2-D structure of the image.
Figure 3. "Atrous" convolutions with r = 1, 2, and 3. The first convolution (r = 1) is the ordinary convolution.
Figure 4. Illustration of atrous convolution for dense feature map generation. Red route: standard convolution performed on a low resolution feature map. Blue route: dense feature map generated using atrous convolution with rate r = 2 on a high resolution input feature map.
Figure 5. Multi-scale network architecture.
Figure 6. Three sample examples for classification training. (a) Original images; (b) ground truth (GT) labels corresponding to the images in (a).
Figure 7. General procedure of network training.
Figure 8. Softmax function performed on the output feature map.
Figure 9. General procedure of image classification using the trained network.
Figure 10. General procedure of the patch-based CNN classification experiment.
Figure 11. Classification results on GF-2 images (Experiment A). (a) Original images; (b) GT labels; (c–e) results of the MR-SVM object-oriented classification, patch-based CNN classification, and FCN-8s classification, respectively; (f) our classification results.
Figure 12. Classification results on IKONOS images (Experiment B). (a) Original images; (b) GT labels; (c–e) results of the MR-SVM object-oriented classification, patch-based CNN classification, and FCN-8s classification, respectively; (f) our classification results.
Figure 13. Incorrect image object generated by MR segmentation. (a) Original images; (b) GT labels; (c) an incorrect image object covering both a building and cement ground (yellow boundary).
Figure 14. Heat maps for the building class generated by the patch-based CNN and our approach. (a) Original images; (b) heat map generated by patch-based CNN classification using 128 × 128 patches; (c) heat map generated by the FCN model.
Figure 15. Detail comparison between FCN-8s and our approach. (a) Original images; (b) FCN-8s classification result; (c) our classification result.
7138 KiB  
Article
Multi-Decadal Surface Water Dynamics in North American Tundra
by Mark L. Carroll and Tatiana V. Loboda
Remote Sens. 2017, 9(5), 497; https://doi.org/10.3390/rs9050497 - 18 May 2017
Cited by 44 | Viewed by 7705
Abstract
Over the last several decades, warming in the Arctic has outpaced the already impressive increases in global mean temperatures. The impact of these increases in temperature has been observed in a multitude of ecological changes in North American tundra including changes in vegetative [...] Read more.
Over the last several decades, warming in the Arctic has outpaced the already impressive increases in global mean temperatures. The impact of these temperature increases has been observed in a multitude of ecological changes in the North American tundra, including changes in vegetative cover, depth of the active layer, and surface water extent. The low topographic relief and continuous permafrost create an ideal environment for the formation of small water bodies, a defining feature of the tundra surface. In this study, water bodies in Nunavut territory in northern Canada were mapped using a long-term record of remotely sensed observations at 30 m spatial resolution from the Landsat suite of instruments, and the temporal trajectories of water extent between 1985 and 2015 were assessed. Over 675,000 water bodies were identified over the 31-year study period, with over 168,000 showing a significant (p < 0.05) trend in surface area. Approximately 55% of the water bodies with a significant trend were increasing in size, while the remaining 45% were decreasing. The overall net trend for water bodies with a significant trend is 0.009 ha year−1 per water body. Full article
(This article belongs to the Special Issue Remote Sensing of Arctic Tundra)
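The per-water-body trend statistic described in the abstract amounts to an ordinary least-squares regression of annual surface area on year, with the p-value deciding significance at the 0.05 level. A minimal sketch with synthetic data (not the paper's code or data):

```python
import numpy as np
from scipy import stats

def water_body_trend(years, areas_ha, alpha=0.05):
    """Least-squares trend in surface area for one water body.

    years    : sequence of observation years (e.g., 1985..2015)
    areas_ha : surface area of the water body in each year (ha)
    Returns (slope in ha/year, p-value, significant?), the kind of
    per-object statistic used to split lakes into growing vs.
    shrinking populations.
    """
    res = stats.linregress(years, areas_ha)
    return res.slope, res.pvalue, res.pvalue < alpha

rng = np.random.default_rng(0)
years = np.arange(1985, 2016)
areas = 12.0 + 0.02 * (years - 1985) + rng.normal(0, 0.05, years.size)
slope, p, sig = water_body_trend(years, areas)
print(f"trend = {slope:+.4f} ha/year, p = {p:.3g}, significant = {sig}")
```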
Show Figures

Graphical abstract

Figure 1. Study area in the North American Arctic region, north-central Nunavut territory, Canada. The study area lies primarily in the Southern Arctic ecoregion and is characterized by low topography and numerous small to moderately sized water bodies. The Queen Maud Gulf bird sanctuary is indicated with the red polygon.
Figure 2. Algorithm flow for the generation of annual water maps from the individual dates of DSWE (reproduced from [28]). In panel (a), individual scenes are converted from four classes to two (land and water) and then summed to obtain the total observations of land and of water for the period. Panel (b) shows the "total water" for an individual path/row. Panel (c) shows the mosaicked non-overlapping path/rows, which are then summed. Panel (d) shows the final "total water" for the full region of interest.
Figure 3. Distribution of the WorldView-2 (WV2) scenes used in the accuracy assessment of the annual water maps. Footprints of WV2 scenes are shown as dark grey rectangles distributed throughout the image.
Figure 4. Water is shown in black and land in light grey. The first five images show a lake complex (a small unnamed lake in northern Nunavut) in five individual years; the final image shows the master map, which is the maximum extent over the whole 31-year record. Through time, the lake shrinks and splits into several components; the master map allows all of these components to be related to the same water body even after they split into individual pieces. The years shown are representative examples of the 31-year record.
Figure 5. Total annual area of surface water between 1985 and 2015. Red circles denote local temporal maxima and the minimum in the record.
Figure 6. Difference in extent for several lakes in the study region. Lighter colors indicate that water was present in early years of the study but not in later years.
Figure 7. Variability of river extent and flow from 1985–2015. Lighter colors indicate that water was present in some years but not in all years, which is particularly noticeable at the edges of the river and on the islands in the middle of the channel.
Figure 8. Spatial distribution of water bodies with a significant (p < 0.05) trend in surface water area (ha/year) over the 31-year study period. Water bodies increasing in size are shown in green; those decreasing in size are shown in red.
6875 KiB  
Technical Note
Performance of MODIS C6 Aerosol Product during Frequent Haze-Fog Events: A Case Study of Beijing
by Wei Chen, Aiping Fan and Lei Yan
Remote Sens. 2017, 9(5), 496; https://doi.org/10.3390/rs9050496 - 18 May 2017
Cited by 19 | Viewed by 5825
Abstract
The newly released MODIS Collection 6 aerosol products have been widely used to evaluate fine particulate matter with a 10 km Dark Target aerosol optical depth (DT AOD) product, a new 3 km DT AOD product and an enhanced Deep Blue (DB) AOD [...] Read more.
The newly released MODIS Collection 6 aerosol products have been widely used to evaluate fine particulate matter with a 10 km Dark Target aerosol optical depth (DT AOD) product, a new 3 km DT AOD product and an enhanced Deep Blue (DB) AOD product. However, the representativeness of the MODIS AOD products under different air quality conditions remains unclear. In this study, we obtained all three types of MODIS Terra AOD from 2001 to 2015 and Aqua AOD from 2003 to 2015 for the Beijing region to study the performance of the different AOD products (Collection 6) under different air quality situations. Validation of the three MODIS AOD products suggests that the DB AOD has the highest accuracy, with an expected error (EE) envelope (containing at least 67% of the matchups on a scatter plot) of 0.05 + 0.15τ, followed by the 10 km DT AOD (0.08 + 0.2τ) and the 3 km DT AOD (0.35 + 0.15τ), specifically for Beijing. Near-surface PM2.5 concentrations during the MODIS overpasses from 2013 to 2015 were also obtained to categorize air quality as unpolluted, moderately polluted, or heavily polluted, and to analyze the performance of the different AOD products under each condition. Very few MODIS 3 km DT retrievals appeared on heavily polluted days, making it almost impossible for this product to play an effective role in air quality applications in Beijing. The DB AOD allowed for considerable retrievals under all air quality conditions, but has a coarse spatial resolution. These results demonstrate that the MODIS 3 km DT AOD product may not be an appropriate proxy for the satellite retrieval of surface PM2.5, especially for areas with frequent haze-fog events like Beijing. Full article
(This article belongs to the Special Issue Remote Sensing of Atmospheric Pollution)
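The expected error (EE) criterion used here is straightforward to compute from satellite-AERONET matchups: a product is conventionally judged acceptable when at least two-thirds of retrievals fall within ±(a + b·AOD_AERONET). A small sketch with simulated matchups; a and b follow the values quoted in the abstract, while the data are made up:

```python
import numpy as np

def within_ee(aod_sat, aod_aeronet, a, b):
    """Fraction of matchups inside an expected-error envelope.

    The envelope is +/-(a + b * AOD_AERONET); an AOD product is
    conventionally judged acceptable when >= 2/3 of matchups fall
    inside it.  Example values: a=0.05, b=0.15 as quoted for the
    Deep Blue product over Beijing.
    """
    aod_sat = np.asarray(aod_sat, float)
    aod_aeronet = np.asarray(aod_aeronet, float)
    envelope = a + b * aod_aeronet
    inside = np.abs(aod_sat - aod_aeronet) <= envelope
    return inside.mean()

# Illustrative matchups
rng = np.random.default_rng(0)
truth = rng.uniform(0.1, 1.5, 500)
retrieved = truth + rng.normal(0, 0.08, 500)
print(f"{within_ee(retrieved, truth, a=0.05, b=0.15):.1%} within EE")
```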
Show Figures

Graphical abstract

Figure 1. Distribution of the two Aerosol Robotic Network (AERONET) stations and two air quality stations used in this study. The AERONET stations are Beijing-CAMS (39.93°N, 116.32°E) and Beijing (39.97°N, 116.38°E); the air quality stations are Olympic Sports Center (39.98°N, 116.40°E) and West Park Official (39.93°N, 116.34°E).
Figure 2. Validation of the Moderate Resolution Imaging Spectroradiometer (MODIS; both Terra and Aqua) 10 km Dark Target aerosol optical depth (DT AOD) against AERONET measurements at the Beijing and Beijing-CAMS stations. EE: expected error.
Figure 3. Validation of the MODIS (both Terra and Aqua) 10 km Deep Blue (DB) AOD against AERONET measurements at the Beijing and Beijing-CAMS stations.
Figure 4. Validation of the MODIS (both Terra and Aqua) 3 km Dark Target AOD against AERONET measurements at the Beijing and Beijing-CAMS stations.
Figure 5. Monthly successful retrieval counts of the MODIS 10 km Dark Target AOD, 10 km Deep Blue AOD, 3 km Dark Target AOD, and AERONET retrievals during the overpasses of Terra and Aqua for the Beijing station.
Figure 6. Retrieval number histograms of the MODIS 10 km Dark Target AOD, 10 km Deep Blue AOD, 3 km Dark Target AOD, AERONET retrievals, and total PM2.5 data during the overpasses of Terra and Aqua for (a) the Beijing and (b) the Beijing-CAMS station.
Figure 7. Correlation of PM2.5 concentrations against the (a) MODIS 3 km DT AOD; (b) MODIS 10 km DT AOD; (c) MODIS 10 km DB AOD; and (d) AERONET AOD for the Beijing-CAMS station.
Figure 8. Correlation of PM2.5 concentrations against the (a) MODIS 3 km DT AOD; (b) MODIS 10 km DT AOD; (c) MODIS 10 km DB AOD; and (d) AERONET AOD for the Beijing station.
Figure 9. Comparisons of the monthly average AOD of the (a) MODIS 3 km DT; (b) MODIS 10 km DT; and (c) MODIS 10 km DB products with the AERONET monthly average AOD for the Beijing and Beijing-CAMS stations.
10421 KiB  
Article
Multi-Scale Analysis of Very High Resolution Satellite Images Using Unsupervised Techniques
by Jérémie Sublime, Andrés Troya-Galvis and Anne Puissant
Remote Sens. 2017, 9(5), 495; https://doi.org/10.3390/rs9050495 - 18 May 2017
Cited by 7 | Viewed by 7671
Abstract
This article is concerned with the use of unsupervised methods to process very high resolution satellite images with little or no human intervention. In a context where more and more complex and very high resolution satellite images are available, it has become increasingly [...] Read more.
This article is concerned with the use of unsupervised methods to process very high resolution satellite images with little or no human intervention. In a context where more and more complex and very high resolution satellite images are available, it has become increasingly difficult to build learning sets for supervised algorithms to process such data, and even more difficult to process the images manually. Within this context, we propose a fully unsupervised, step-by-step method to process very high resolution images, making it possible to link clusters to the land cover classes of interest. For each step, we discuss the various challenges and state-of-the-art algorithms that make the full process as efficient as possible. In particular, one of the main contributions of this article comes in the form of a multi-scale analysis clustering algorithm that we use during the processing of the image segments. The proposed methods are tested on a very high resolution (Pléiades) image of the urban area around the French city of Strasbourg and show relevant results at each step of the process. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)
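One element of the pipeline, the cluster affinity matrix (cf. Figure 4 below), can be built by counting how often segments of two clusters are adjacent in the image and normalizing per cluster. A sketch under assumed inputs: a per-segment cluster label array and a precomputed segment adjacency list, both hypothetical names rather than the paper's API:

```python
import numpy as np

def cluster_affinity(labels, adjacency, n_clusters):
    """Cluster-to-cluster affinity from a segment adjacency graph.

    labels    : cluster index of each segment
    adjacency : iterable of (i, j) pairs of neighboring segments
    Counts how often two clusters touch in the image, then
    normalizes each row.  High diagonal values mean a cluster forms
    compact areas; high off-diagonal values mean two clusters are
    frequent neighbors (cf. the affinity matrix of Figure 4).
    """
    A = np.zeros((n_clusters, n_clusters))
    for i, j in adjacency:
        A[labels[i], labels[j]] += 1
        A[labels[j], labels[i]] += 1
    row_sums = A.sum(axis=1, keepdims=True)
    return A / np.maximum(row_sums, 1)

labels = np.array([0, 0, 1, 2, 1, 0])                 # 6 segments, 3 clusters
adjacency = [(0, 1), (1, 2), (2, 4), (3, 4), (0, 5), (1, 5)]
print(cluster_affinity(labels, adjacency, n_clusters=3).round(2))
```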
Show Figures

Graphical abstract

Figure 1. Step-by-step approach to image processing.
Figure 2. Examples of over-segmentation and under-segmentation. (a) An over-segmentation of two houses that could be fixed during the clustering step: the algorithm may still detect that the two segments belong to the same cluster. (b) An under-segmentation where the white object in the middle of the lake was not detected during the segmentation step and never will be, since it is now merged with a lake segment.
Figure 3. Illustration of the MRF clustering problem with very few features: in this example, the cluster of the central segment is to be guessed from five features and the clusters of its neighbor segments (identified by their colors).
Figure 4. Example of an affinity matrix: diagonal values indicate whether clusters form compact areas (high value) or scattered elements in the image (low value). Off-diagonal elements indicate which clusters are often neighbors in the image (high value) or incompatible neighbors (low value).
Figure 5. (Left) The metropolitan area of Strasbourg (Spotimage ©CNES, 2012); (right) extract of the pan-sharpened Pléiades image (Airbus ©CNES, 2012).
Figure 6. Expert classes (a) and the hierarchical classes retained for the experiments (b).
Figure 7. Example of reference data from geographic information systems (GIS). (a) GIS-labeled data; (b) contours of the GIS polygons.
Figure 8. Expert classes in grey (right) and hierarchical clusters extracted from the confusion matrices Ω found by the proposed method (left). Plain arrows highlight strong links, dashed arrows mild links, and dotted arrows weak links. Arrows and characters in red highlight potentially harmful errors in the clusters or their hierarchy when compared with the expected classes.
Figure 9. Original image (extract), reference data images and results of different algorithms looking for six clusters. (a) Original image, Pléiades ©Airbus, CNES 2012; (b) reference data ©EMS 2012: raw polygons; (c) hybrid reference data; (d) multi-scale SR-ICM at the six-cluster scale; (e) SOM algorithm [4] with six clusters; (f) EM algorithm with six clusters.
Figure 10. Original image (extract), reference data and the proposed algorithm at scales of six and 10 clusters. (a) Original image, Pléiades ©Airbus, CNES 2012; (b) hybrid reference data; (c) multi-scale SR-ICM at the six-cluster scale; (d) multi-scale SR-ICM at the 10-cluster scale.
4065 KiB  
Article
Cost-Effective Class-Imbalance Aware CNN for Vehicle Localization and Categorization in High Resolution Aerial Images
by Feimo Li, Shuxiao Li, Chengfei Zhu, Xiaosong Lan and Hongxing Chang
Remote Sens. 2017, 9(5), 494; https://doi.org/10.3390/rs9050494 - 18 May 2017
Cited by 21 | Viewed by 7609
Abstract
Joint vehicle localization and categorization in high resolution aerial images can provide useful information for applications such as traffic flow structure analysis. To maintain sufficient features for recognizing small-scale vehicles, a regions with convolutional neural network features (R-CNN)-like detection structure is employed. [...] Read more.
Joint vehicle localization and categorization in high resolution aerial images can provide useful information for applications such as traffic flow structure analysis. To maintain sufficient features for recognizing small-scale vehicles, a regions with convolutional neural network features (R-CNN)-like detection structure is employed. In this setting, cascaded localization error can be averted by treating the negatives and the differently typed positives equally as a multi-class classification task, but the problem of class imbalance remains. To address this issue, a cost-effective network extension scheme is proposed, in which the convolution and connection costs correlated with extension are reduced by feature map selection and bi-partite main-side network construction. These are realized with the assistance of a novel feature map class-importance measurement and a new class-imbalance sensitive main-side loss function. The effectiveness of the proposed network extension is verified on an image classification dataset built from traditional true-color aerial images with a 0.13 m ground sampling distance, taken from a height of 1000 m by an imaging system composed of non-metric cameras, by comparison with similarly shaped strong counterparts. Experiments show equivalent or better performance, while the smallest parameter and memory overheads are required. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)
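The class-importance measurement rests on a first-order Taylor term: the change in a class probability with respect to each feature map, multiplied by the map's activation, approximates the map's contribution to that class. A PyTorch sketch of that generic gradient-times-activation score; a simplified reading for illustration, not necessarily the paper's exact measurement:

```python
import torch

def feature_map_class_importance(features, class_prob):
    """First-order Taylor score of each feature map for one class.

    features   : (C, H, W) conv-layer activations, requires_grad=True
    class_prob : scalar class probability computed from those features
    The Taylor term (dP/dZ_q) * Z_q, summed per map, approximates how
    much class_prob would change if map q were removed, one way to
    rank maps by class importance before selecting them for extension.
    """
    grads, = torch.autograd.grad(class_prob, features, retain_graph=True)
    return (grads * features).sum(dim=(1, 2))   # one score per map

# Toy setup: 8 maps of 4x4, class probability from global average pooling
features = torch.randn(8, 4, 4, requires_grad=True)
w = torch.randn(8)
logit = (features.mean(dim=(1, 2)) * w).sum()
prob = torch.sigmoid(logit)
print(feature_map_class_importance(features, prob))
```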
Show Figures

Graphical abstract

Figure 1. A typical convolutional neural network (CNN) structure, with feature and difference maps produced by the forward and backward propagations. SW: station wagon; WT: working truck.
Figure 2. Illustration of the semantic meaning of the convolutional kernels. The raw input image is displayed in the Raw Image column; the six feature maps produced by six different kernels at the CONV5 layer are shown in the Feature Map column; and six arrays of local image crops on which the top six feature map activations are produced are shown in the Top Activation Image Crops column.
Figure 3. The general structure of the proposed network enhancement method.
Figure 4. The first-order term of the Taylor expansion in Equation (8). ∂P(y = i | Z^(k−1)) / ∂Z_q^(k−1) denotes the feature map difference; positive, negative, and zero values are marked in green, red, and black.
Figure 5. Correlations of the max-activations and class-importance with the class probability of the negative class. (a) Max-activation vs. class probability. (b) Max class-importance vs. class probability.
Figure 6. Scatter plots showing the distribution of the feature maps {Z_q} from CONV3 and CONV4 in the class-importance vs. max-activation space. (a) The distributions of CONV3 and CONV4 feature maps. (b) Feature maps correlated to the five classes by the class-importance measurement.
Figure 7. (a) The 50 selected maps for N_sel = 64. (b) The 109 selected maps for N_sel = 160.
Figure 8. Principal structure of the class-imbalance aware main-side network.
Figure 9. The t-distributed stochastic neighbor embedding (t-SNE) visualization [73] of the negatives and vehicle types in the FC8 output space, and the three penalization modes used for B: (a) global, (b) local, and (c) batch-wise.
Figure 10. (a) A typical frame from the training sample. (b1–b4) Typical difficult detection cases. (c) The close-to-vehicle region (shaded blue) and categorical sampling positions.
Figure 11. The sample categories used in the three regions: centered, close range, and far range.
Figure 12. Three typical extension schemes. (a) Plain extension with blank-kernel-generated feature maps; (b) plain extension with selected feature maps; (c) main-side bi-parted extension with selected feature maps.
Figure 13. The five network structures studied in the experimental section. (a) The baseline miniature visual geometry group network (VGG-M) (Orig.M) and (b) the 16-layered VGG (Orig.16); the comparative extensions with either (c,d) the softmax loss (New Ext., Select Ext.) or (e,f) the proposed main-side loss (New S-Ext., Select S-Ext.).
Figure 14. Network classification performance improvement illustrated on the established classification dataset. (a) Newly recognized positives after extension. (b) Prediction accuracies and their increments on the sample categories: centered (Cent.), close range (Close), and far range (Far).
Figure 15. Overall performance comparisons between Orig.M, New Ext. and Select Ext. under different extension sizes: (a) averaged F1 scores; (b) averaged accuracies. Instances where Select Ext. is comparable to New Ext. are marked by arrows.
Figure 16. Efficiency comparison of extended feature maps (kernels). N_pos is the number of all vehicles. Selected feature maps (kernels) are more effective for small extensions and minority classes.
Figure 17. Influences of the coefficient λ and the ReLU constraint on the overall accuracy and F1 score in three modes: (a) averaged accuracies; (b) averaged F1 scores.
Figure 18. Influences of the penalization mode and the coefficient λ on accuracy and F1 score for different vehicle types. N_pos is the number of positives, i.e., all vehicles.
9559 KiB  
Article
Object-Based Detection of Linear Kinematic Features in Sea Ice
by Stefanie Linow and Wolfgang Dierking
Remote Sens. 2017, 9(5), 493; https://doi.org/10.3390/rs9050493 - 18 May 2017
Cited by 14 | Viewed by 6613
Abstract
Inhomogeneities in the sea ice motion field cause deformation zones, such as leads, cracks and pressure ridges. Due to their long and often narrow shape, those structures are referred to as Linear Kinematic Features (LKFs). In this paper we specifically address the identification [...] Read more.
Inhomogeneities in the sea ice motion field cause deformation zones, such as leads, cracks and pressure ridges. Due to their long and often narrow shape, those structures are referred to as Linear Kinematic Features (LKFs). In this paper we specifically address the identification and characterization of variations and discontinuities in the spatial distribution of the total deformation, which appear as LKFs. The distribution of LKFs in the ice cover of the polar oceans is an important factor influencing the exchange of heat and matter at the ocean-atmosphere interface. Current analyses of the sea ice deformation field often ignore the spatial/geographical context of individual structures, e.g., their orientation relative to adjacent deformation zones. In this study, we adapt image processing techniques to develop an LKF detection method that is able to resolve individual features. The data are vectorized to obtain results at an object-based level, and a semantic postprocessing step then determines the angles of junctions and of crossing structures. The proposed object detection method is carefully validated: we found a localization uncertainty of 0.75 pixel and a length error of 12% in the identified LKFs. The detected features can be individually traced to their geographical position, so a wide variety of new metrics for ice deformation can easily be derived, including spatial parameters as well as the temporal stability of individual features. Full article
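The detection pipeline can be prototyped with standard image processing tools: log-scale the total deformation, band-pass it with a difference-of-Gaussians (DoG) filter to enhance narrow elongated structures, threshold the positive response, and thin the mask to one-pixel skeletons ready for vectorization. A sketch mirroring that sequence in spirit; the parameter values and test field are illustrative, not the paper's:

```python
import numpy as np
from scipy import ndimage
from skimage.morphology import skeletonize

def lkf_candidates(total_deformation, s1=1.0, s2=3.0, thresh=0.1):
    """Pixel-level LKF candidates from a total-deformation field.

    Log-scales the deformation, band-passes it with a difference-of-
    Gaussians (DoG) filter, thresholds the positive response, and
    thins the binary mask to one-pixel-wide skeletons that can then
    be vectorized into individual polyline features.
    """
    img = np.log(np.maximum(total_deformation, 1e-6))
    dog = ndimage.gaussian_filter(img, s1) - ndimage.gaussian_filter(img, s2)
    mask = dog > thresh
    return skeletonize(mask)

field = np.ones((100, 100)) * 0.01
field[48:52, :] = 0.5                 # a synthetic, lead-like feature
print(lkf_candidates(field).sum(), "skeleton pixels")
```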
Show Figures

Graphical abstract

Figure 1. RGPS (Radarsat Geophysical Processor System) example data set (4 January 2006), calculated total deformation.
Figure 2. Image enhancement: (a) logarithm-scaled total deformation I_l; (b) result of histogram equalization; (c) DoG (difference of Gaussians) filtered image; red: I_f > 0, blue: I_f < 0.
Figure 3. Image segmentation: (a) segmentation result B̂; (b) result of thinning B.
Figure 4. Example neighborhoods/detection steps. The magenta lines symbolize detected polyline objects. (a) Line start; (b) direction change; (c) intersection.
Figure 5. Example demonstrating the problem of connecting adjacent LKFs. Dotted lines indicate single segments of the real LKF; solid lines represent the major orientation of the LKF. In the search ellipse, the end point of line B can potentially be connected to line C or D; the decision is based on differences between the major orientations.
Figure 6. Detected objects for different minimum lengths l_min and numbers of detected objects n. (a) l_min = 4 px, n = 208; (b) l_min = 6 px, n = 160; (c) l_min = 8 px, n = 124.
Figure 7. Intrinsic accuracy of the validation data. (a) Reference features in R_0; the yellow box marks the location of Figure 7b. (b) Reference line example; hatched area: mean value ± standard deviation of the seven individually determined results from the R_0 data set (dashed white lines).
Figure 8. Uncertainties in LKF localization. (a) Endpoint deviation = average of the lengths of the two magenta lines connecting the endpoints of lines A and B. (b) Contributions to the localization error (magenta lines). Since A and B are sampled at different (x̂, ŷ) positions, line B is interpolated at the x̂ positions of line A; the interpolated line is denoted B_resampled in the figure.
Figure 9. Distribution of mean endpoint distances. (a) R_0: N = 129, σ = 1.75 px; (b) R_0^+: N = 136, σ = 1.25 px; (c) R_10: N = 1411, σ = 2.75 px.
Figure 10. LKF length error distribution (in percent of the total line length). (a) R_0: N = 129; (b) R_0^+: N = 136; (c) R_10: N = 1411.
Figure 11. Distribution of localization errors. (a) R_0: N = 129, σ = 0.75 px; (b) R_0^+: N = 136, σ = 0.75 px; (c) R_10: N = 1411, σ = 0.75 px.
Figure 12. Endpoint distance vs. integrated line length as an indicator of feature linearity. (a) Detected objects; (b) reference data (R_10).
Figure 13. Feature length and orientation. Lines with angles of 0° are oriented along the parallels; lines with 90° angles are oriented in the meridional direction. (a) Feature length distribution; (b) distribution of feature orientation angles.
Figure 14. Subsets/mapping of gridded image data. (a) Divergence (1/day); (b) shear (1/day).
Figure 15. LKF intersection angles (bin size = 1°).
Figure 16. Cost function for intersection angle tolerance values ranging from 5° to 60°, in steps of 2°. The dotted line marks the 35° threshold.
4188 KiB  
Article
Automatic Detection of Uprooted Orchards Based on Orthophoto Texture Analysis
by Raquel Ciriza, Ion Sola, Lourdes Albizua, Jesús Álvarez-Mozos and María González-Audícana
Remote Sens. 2017, 9(5), 492; https://doi.org/10.3390/rs9050492 - 17 May 2017
Cited by 21 | Viewed by 5838
Abstract
Permanent crops, such as olive groves, vineyards and fruit trees, are important in European agriculture because of their spatial and economic relevance. Agricultural geographical databases (AGDBs) are commonly used by public bodies to gain knowledge of the extension covered by these crops and [...] Read more.
Permanent crops, such as olive groves, vineyards and fruit trees, are important in European agriculture because of their spatial and economic relevance. Agricultural geographical databases (AGDBs) are commonly used by public bodies to gain knowledge of the extent covered by these crops and to manage related agricultural subsidies and inspections. However, the updating of these databases is mostly based on photointerpretation, so keeping this information up-to-date is very costly in terms of time and money. This paper describes a methodology for the automatic detection of uprooted orchards (parcels where fruit trees have been eliminated) based on the textural classification of orthophotos with a spatial resolution of 0.25 m. The textural features used for this classification were derived from the grey level co-occurrence matrix (GLCM) and the wavelet transform, and were selected through principal component analysis (PCA) and separability analyses. A discriminant analysis classification algorithm was then used to detect uprooted orchards. Entropy, contrast and correlation were found to be the most informative textural features obtained from the co-occurrence matrix; the minimum and the standard deviation of wavelet plane 3 were the features selected from the wavelet transform. The classification based on these features achieved a true positive rate (TPR) of over 80% and an accuracy (A) of over 88%. As a result, this methodology reduced the number of fields requiring photointerpretation by 60–85%, depending on the membership threshold value selected. The proposed approach could easily be adopted by different stakeholders and could significantly increase the efficiency of agricultural database updating tasks. Full article
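The GLCM features found most informative here (entropy, contrast and correlation) are straightforward to extract per parcel with scikit-image; entropy is computed from the normalized co-occurrence probabilities, since graycoprops does not provide it directly. A sketch assuming scikit-image ≥ 0.19 naming, with illustrative offsets rather than the paper's exact settings:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_texture(parcel_gray):
    """Entropy, contrast and correlation of a parcel's texture.

    parcel_gray : 2-D uint8 image clipped to one declared parcel.
    Contrast and correlation come straight from graycoprops;
    entropy is derived from the normalized co-occurrence
    probabilities, averaged over the two offsets.
    """
    glcm = graycomatrix(parcel_gray, distances=[1],
                        angles=[0, np.pi / 2], levels=256,
                        symmetric=True, normed=True)
    entropy = 0.0
    for k in range(glcm.shape[-1]):          # one matrix per angle
        p = glcm[:, :, 0, k]
        p = p[p > 0]
        entropy -= (p * np.log2(p)).sum()
    entropy /= glcm.shape[-1]
    return {"entropy": entropy,
            "contrast": graycoprops(glcm, "contrast").mean(),
            "correlation": graycoprops(glcm, "correlation").mean()}

rng = np.random.default_rng(1)
parcel = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
print(glcm_texture(parcel))
```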
Show Figures

Graphical abstract

Figure 1. Study sites (in red) superimposed on the map of municipalities of Navarre, colored according to the area in hectares (ha) of orchards cultivated per municipality.
Figure 2. Multi-stage diagram for the detection of uprooted orchards by textural analysis of orthophotos using Haralick features based on the GLCM and plane-based wavelet features, both independently and in combination. Orchard and uprooted parcels are represented in green and yellow, respectively.
Figure 3. (A.1) Orthophoto of an orchard; (A.2) orthophoto of an uprooted orchard; (B.1) co-occurrence matrix for an orchard, in real and logarithmic scales; (B.2) co-occurrence matrix for an uprooted orchard; (C.1) wavelet plane 1 image for an orchard; (C.2) wavelet plane 1 image for an uprooted orchard.
Figure 4. Loading plots for GLCM-based features generated by: (a) the first (PC1) and second (PC2) principal components; and (b) the first and third (PC3) principal components.
Figure 5. T distance between the Orchard and Uprooted classes for: (A) GLCM-based features; and (B) features based on wavelet planes.
Figure 6. Loading plot for wavelet features generated by the first (PC1) and second (PC2) principal components.
Figure 7. Examples of parcels that were incorrectly classified.
Figure 8. (a) Histogram showing the probability of membership in the class Uprooted obtained with Tex_Sel_H+W; (b) the same histogram confronted with the ground truth.
12423 KiB  
Article
Signal Processing for a Multiple-Input, Multiple-Output (MIMO) Video Synthetic Aperture Radar (SAR) with Beat Frequency Division Frequency-Modulated Continuous Wave (FMCW)
by Seok Kim, Jiwoong Yu, Se-Yeon Jeon, Aulia Dewantari and Min-Ho Ka
Remote Sens. 2017, 9(5), 491; https://doi.org/10.3390/rs9050491 - 17 May 2017
Cited by 20 | Viewed by 10987
Abstract
In this paper, we present a novel signal processing method for video synthetic aperture radar (ViSAR) systems, which are suitable for operation in unmanned aerial vehicle (UAV) environments. The technique improves aspects of the system’s performance, such as the frame rate and image size of the synthetic aperture radar (SAR) video. The new ViSAR system is based on a frequency-modulated continuous wave (FMCW) SAR structure that is combined with multiple-input multiple-output (MIMO) technology and multi-channel azimuth processing techniques. FMCW technology is advantageous for use in low-cost, small, and lightweight systems, like small UAVs. MIMO technology is utilized for increasing the equivalent number of receiving channels in the azimuthal direction, and reducing the aperture size. This effective increase is achieved using a co-array concept by means of beat frequency division (BFD) FMCW. A multi-channel azimuth processing technique is used for improving the frame rate and image size of SAR video, by suppressing the azimuth ambiguities in the receiving channels. This paper also provides analyses of the frame rate and image size of SAR video of ViSAR systems. The performance of the proposed system is evaluated using an exemplary system. The results of the analyses are presented, and their validity is verified using numerical simulations. Full article
(This article belongs to the Special Issue Advances in SAR: Sensors, Methodologies, and Applications)
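For readers wanting a feel for the frame-rate analysis mentioned in the abstract, the following back-of-the-envelope sketch uses the standard SAR aperture relation L = λR/(2ρa); the X-band wavelength, range, and velocity values are assumptions, not the paper's system parameters.

```python
# Hedged sketch: non-overlapped ViSAR frame rate from the standard aperture
# relation L = lambda * R / (2 * rho_a). All numbers are illustrative.
wavelength = 0.0319   # m, ~9.4 GHz X-band (assumed)
slant_range = 1000.0  # m (assumed)
velocity = 20.0       # m/s platform speed
rho_a = 0.08          # m azimuth resolution

aperture = wavelength * slant_range / (2 * rho_a)   # ~199 m
frame_rate = velocity / aperture                    # ~0.10 frames/s
print(f"aperture {aperture:.1f} m -> {frame_rate:.3f} fps without overlap")
```

Overlapping successive apertures and adding equivalent MIMO channels, as the paper proposes, is what pushes the achievable frame rate above this baseline.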
Figure 1: The collection geometry of video SAR on a circular path. SAR: synthetic aperture radar.
Figure 2: The collection geometry of video SAR on a circular path: top view. CDU: cardinal direction up.
Figure 3: Frame rate of video SAR on a circular path: (a) varying resolution at v = 20 m/s; (b) varying platform velocity at ρa = 0.08 m.
Figure 4: Doppler bandwidth limited by scene size.
Figure 5: Scene size limitation of the polar format algorithm (PFA).
Figure 6: Collection geometry of video SAR on a circular path.
Figure 7: Antenna geometry: (a) the actual array; (b) the equivalent virtual array. Tx: transmitting; Rx: receiving.
Figure 8: Signal model of the generic multiple-input multiple-output (MIMO) video SAR.
Figure 9: Signal processing block diagram for MIMO video SAR with a beat frequency division FMCW waveform. FMCW: frequency-modulated continuous wave; BFD: beat frequency division.
Figure 10: System block diagram for MIMO video SAR with a beat frequency division FMCW waveform in the case of two Tx channels and two Rx channels (M = 2, N = 2). DDS: direct digital synthesis; LO: local oscillator; IF: intermediate frequency; Tx: transmit; ADC: analog-to-digital conversion.
Figure 11: Collection geometry.
Figure 12: Results of BFD FMCW waveform demodulation: (a) before demodulation; (b) after demodulation.
Figure 13: Azimuth spectrum before MCRA: (a) three-dimensional plot; (b) two-dimensional plot at target. MCRA: multi-channel reconstruction algorithm.
Figure 14: Azimuth spectrum after MCRA: (a) three-dimensional plot; (b) two-dimensional plot at target.
Figure 15: SAR image: (a) two-dimensional plot; (b) contour plot (zoomed).
Figure 16: Impulse response analysis: (a) in down-range; (b) in cross-range. PSLR: peak sidelobe ratio.
Figure 17: Collection geometry and scene geometry.
Figure 18: Simulation results before MCRA: (a) three-dimensional plot; (b) two-dimensional plot; (c) azimuth frequency cut at Scatterer E.
Figure 19: Simulation results after MCRA: (a) three-dimensional plot; (b) two-dimensional plot; (c) azimuth frequency cut at Scatterer E.
Figure 20: SAR video frame at aspect angle 0°.
Figure 21: SAR video frames before image rotation at aspect angle: (a) 20°; (b) 30°; (c) 40°; (d) 50°; (e) 60°; (f) 70°.
Figure 22: SAR video frames after image rotation at aspect angle: (a) 20°; (b) 30°; (c) 40°; (d) 50°; (e) 60°; (f) 70°.
20328 KiB  
Article
Exploiting the Redundancy of Multiple Overlapping Aerial Images for Dense Image Matching Based Digital Surface Model Generation
by Wojciech A. Dominik
Remote Sens. 2017, 9(5), 490; https://doi.org/10.3390/rs9050490 - 17 May 2017
Cited by 8 | Viewed by 5498
Abstract
In recent years, significant development in the domain of dense image matching (DIM) can be observed. Meanwhile, in most countries, aerial images are acquired countrywide on a regular basis with decreasing time intervals and increasing image overlaps. Therefore, aerial images represent a growing potential for digital surface model (DSM) acquisition and updating. Surface reconstruction by image matching, in most cases, requires dealing with the redundancy caused by multiple overlapping images. Many approaches considering this redundancy in the surface reconstruction process have been developed. However, there is no commonly accepted procedure for this task. From the experience of the author, it can be stated that currently applied methods show some limitations regarding DSM generation from aerial images. Therefore, it is claimed that there is room for the development of new algorithms for the integration of dense image matching results from multiple stereo pairs. Methods dedicated to aerial image based DSM generation that would exploit the specificity of this task are desirable. In this paper, an approach to compute the DSM elevations from redundant elevation hypotheses derived by pairwise dense image matching is presented. The proposed approach takes into account the base-to-height (b/h) ratio of stereo pairs, the distribution of elevation hypotheses from multiple stereo pairs and the neighboring elevations. An algorithm for selecting the elevation hypotheses used to calculate the final DSM elevation for each grid cell was developed. The algorithm was used to generate the DSM based on two sets of aerial images having significantly different acquisition parameters. The results were compared to the models obtained from several commonly used software packages for image based DSM generation. The quality assessment was carried out by visual inspection of terrain profiles and shaded surface display, as well as by the planarity control of flat parts of the terrain. The assessment of the results showed that the application of the proposed algorithm can bring some advantages and contribute to improving the quality of the DSM. Full article
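To make the selection idea concrete, here is a minimal sketch of one plausible per-cell rule consistent with the abstract: anchor on the median of the low-b/h hypotheses and keep all hypotheses within a threshold of it. The threshold value is an assumption; the paper's actual Equation (3) and weighting are not reproduced.

```python
# Hedged sketch: fuse redundant elevation hypotheses for one DSM grid cell.
# The 0.4 m threshold stands in for the paper's GSD-dependent Equation (3).
import numpy as np

def cell_elevation(low_bh_hypotheses, all_hypotheses, threshold=0.4):
    anchor = np.median(low_bh_hypotheses)          # robust low-b/h anchor
    near = all_hypotheses[np.abs(all_hypotheses - anchor) <= threshold]
    return float(near.mean()) if near.size else float(anchor)

z = cell_elevation(np.array([10.2, 10.3, 10.4]),
                   np.array([10.2, 10.3, 10.4, 10.9, 10.25]))
print(z)  # hypotheses far from the anchor (e.g., 10.9) are rejected
```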
Figure 1: (a) Elevation hypotheses along a terrain profile derived from all available stereo pairs. The b/h ratios of the stereo pairs are differentiated by color. (b) The terrain profile (red line) shown on the true orthophoto. (c) The standard deviation of the distribution of elevation hypotheses along the terrain profile (black line) and the threshold calculated according to Equation (3) for a ground sampling distance (GSD) of 20 cm (red dotted line).
Figure 2: (a) Profile of a point cloud (black dots) derived from multi-view dense image matching carried out with the use of Pix4d software. The red line represents the digital surface model (DSM) generated from the point cloud with the use of Pix4d software. (b) The terrain profile (red line) shown on the true orthophoto.
Figure 3: (a) DSMs generated based on two groups of elevation hypotheses. It can be seen that low b/h ratio stereo pairs show better performance in vegetation reconstruction, whereas high b/h ratio stereo pairs give a smoother surface in paved areas. (b) The terrain profile (red line) shown on the true orthophoto.
Figure 4: (a) Map of the standard deviations of the distribution of elevation hypotheses; and (b) true orthophoto of the same area.
Figure 5: Elevation hypotheses along a terrain profile derived from all available stereo pairs. The b/h ratios of the stereo pairs are differentiated by color. The black line represents the median of the elevation hypotheses from stereo pairs with the lowest b/h ratio (only neighboring images along a flight line). The dotted black lines represent the threshold calculated according to Equation (3) for a GSD of 20 cm, which is used to select the elevation hypotheses from all stereo pairs lying close to the median.
Figure 6: (a) Elevation hypotheses derived from stereo pairs with the lowest b/h ratio (only neighboring images along a flight line). (b) Elevation hypotheses assigned to the cluster (red dots) according to the procedure described in Section 2.4. (c) Screenshot from Google Street View showing the analyzed object.
Figure 7: (a,b) Profiles showing elevation hypotheses derived from stereo pairs with the lowest b/h ratio (only neighboring images along a flight line) (black dots) and the DSM (red line): (a) generated with the use of the methods described in Sections 2.2, 2.3 and 2.4; and (b) after the procedure described in Section 2.5. The red circle indicates elevation hypotheses included in the DSM. (c,d) Color-coded DSM of the same area: (c) generated with the use of the methods described in Sections 2.2, 2.3 and 2.4; and (d) after the procedure described in Section 2.5. The red line indicates the profile shown above.
Figure 8: Flowchart of the proposed algorithm.
Figure 9: Visual analysis of a terrain profile for the Enzersdorf test field (20 cm GSD). (a) Profile shown on the true orthophoto. Red lines indicate parts of the profile analyzed in detail below. (b) Overview of the compared DSMs along the profile. Red rectangles indicate parts of the profile analyzed in detail below. (1–6) Parts of the profile analyzed in detail. The DSMs derived with the use of different software solutions are differentiated by color.
Figure 10: Visual analysis of a terrain profile for the Elbląg test field (5 cm GSD). (a) Profile shown on the true orthophoto. Red lines indicate parts of the profile analyzed in detail below. (b) Overview of the compared DSMs along the profile. Red rectangles indicate parts of the profile analyzed in detail below. (1–8) Parts of the profile analyzed in detail. The DSMs derived with the use of different software solutions are differentiated by color.
Figure 11: Visual analysis of the shaded surface display for the Enzersdorf test field (20 cm GSD): (a) true orthophoto of the analyzed area; (b) proposed algorithm; (c) SURE, all stereo pairs; (d) SURE, lowest b/h ratio stereo pairs; (e) Match-t-DSM; (f) Pix4d; (g) Agisoft Photoscan, high mode; and (h) Agisoft Photoscan, ultrahigh mode.
Figure 12: Visual analysis of the shaded surface display for the Elbląg test field (5 cm GSD): (a) true orthophoto of the analyzed area; (b) proposed algorithm; (c) SURE, all stereo pairs; (d) SURE, lowest b/h ratio stereo pairs; (e) Match-t-DSM; (f) Pix4d; (g) Agisoft Photoscan, high mode; and (h) Agisoft Photoscan, ultrahigh mode.
Figure 13: Part of the selected samples (red squares) for the planarity control carried out for the Elbląg test field (5 cm GSD), shown on the true orthophoto.
Figure 14: Part of the selected samples (blue squares) for the planarity control in shaded areas carried out for the Elbląg test field (5 cm GSD), shown on the true orthophoto.
8391 KiB  
Article
Learning Low Dimensional Convolutional Neural Networks for High-Resolution Remote Sensing Image Retrieval
by Weixun Zhou, Shawn Newsam, Congmin Li and Zhenfeng Shao
Remote Sens. 2017, 9(5), 489; https://doi.org/10.3390/rs9050489 - 17 May 2017
Cited by 193 | Viewed by 12159
Abstract
Learning powerful feature representations for image retrieval has always been a challenging task in the field of remote sensing. Traditional methods focus on extracting low-level hand-crafted features, which are not only time-consuming but also tend to achieve unsatisfactory performance due to the complexity of remote sensing images. In this paper, we investigate how to extract deep feature representations based on convolutional neural networks (CNNs) for high-resolution remote sensing image retrieval (HRRSIR). To this end, several effective schemes are proposed to generate powerful feature representations for HRRSIR. In the first scheme, a CNN pre-trained on a different problem is treated as a feature extractor, since there are no sufficiently sized remote sensing datasets to train a CNN from scratch. In the second scheme, we investigate learning features that are specific to our problem by first fine-tuning the pre-trained CNN on a remote sensing dataset and then proposing a novel CNN architecture based on convolutional layers and a three-layer perceptron. The novel CNN has fewer parameters than the pre-trained and fine-tuned CNNs and can learn low dimensional features from limited labelled images. The schemes are evaluated on several challenging, publicly available datasets. The results indicate that the proposed schemes, particularly the novel CNN, achieve state-of-the-art performance. Full article
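Once features are extracted by any of the schemes, retrieval itself is a nearest-neighbour ranking in feature space; the sketch below shows that step only, with random vectors standing in for real CNN descriptors (dimensions and dataset size are arbitrary assumptions).

```python
# Hedged sketch: image retrieval as nearest-neighbour search over CNN features.
# Random vectors stand in for Fc/Conv descriptors; sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
database = rng.normal(size=(2100, 512))   # one descriptor per archive image
query = rng.normal(size=512)              # descriptor of the query image

# L2-normalise so the dot-product ranking equals cosine similarity.
db_n = database / np.linalg.norm(database, axis=1, keepdims=True)
q_n = query / np.linalg.norm(query)
ranking = np.argsort(-db_n @ q_n)          # most similar archive images first
print(ranking[:10])
```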
Figure 1: The typical architecture of convolutional neural networks (CNNs). The rectified linear unit (ReLU) layers are ignored here for conciseness.
Figure 2: Flowchart of the first scheme: deep features extracted from the Fc2 and Conv5 layers of the pre-trained CNN model. For conciseness, we refer to features extracted from the Fc1–2 and Conv1–5 layers as Fc features (Fc1, Fc2) and Conv features (Conv1, Conv2, Conv3, Conv4, Conv5), respectively.
Figure 3: Flowchart of extracting features from the fine-tuned layers. Dropout1 and dropout2 are dropout layers which are used to control overfitting. N is the number of image classes in the target dataset.
Figure 4: The overall structure of the proposed, novel CNN architecture. There are five linear convolution layers and an mlpconv layer followed by a global average pooling layer.
Figure 5: Sample images from the University of California, Merced dataset (UCMD). From the top left to bottom right: agricultural, airplane, baseball diamond, beach, buildings, chaparral, dense residential, forest, freeway, golf course, harbor, intersection, medium density residential, mobile home park, overpass, parking lot, river, runway, sparse residential, storage tanks, and tennis courts.
Figure 6: Sample images from the remote sensing dataset (RSD). From the top left to bottom right: airport, beach, bridge, commercial area, desert, farmland, football field, forest, industrial area, meadow, mountain, park, parking, pond, port, railway station, residential area, river, and viaduct.
Figure 7: Sample images from the RSSCN7 dataset. From left to right: grass, field, industry, lake, resident, and parking.
Figure 8: Sample images from the aerial image dataset (AID). From the top left to bottom right: airport, bare land, baseball field, beach, bridge, center, church, commercial, dense residential, desert, farmland, forest, industrial, meadow, medium residential, mountain, park, parking, playground, pond, port, railway station, resort, river, school, sparse residential, square, stadium, storage tanks, and viaduct.
Figure 9: The effect of ReLU on Fc1 and Fc2 features. (a) Results on the UCMD dataset; (b) results on the RSD dataset; (c) results on the RSSCN7 dataset; (d) results on the AID dataset. For Fc1_ReLU and Fc2_ReLU features, ReLU is applied to the extracted Fc features.
Figure 10: The effect of ReLU on Conv features. (a) Results on the UCMD dataset; (b) results on the RSD dataset; (c) results on the RSSCN7 dataset; (d) results on the AID dataset. For BOVW_ReLU, VLAD_ReLU, and IFK_ReLU features, ReLU is applied to the Conv features before feature aggregation.
Figure 11: Comparison between images of the same class from the four datasets. From left to right, the images are from UCMD, RSSCN7, RSD, and AID, respectively.
Figure 12: The number of parameters contained in VGGM, fine-tuned VGGM, and LDCNN.
5054 KiB  
Article
Estimation and Mapping of Winter Oilseed Rape LAI from High Spatial Resolution Satellite Data Based on a Hybrid Method
by Chuanwen Wei, Jingfeng Huang, Lamin R. Mansaray, Zhenhai Li, Weiwei Liu and Jiahui Han
Remote Sens. 2017, 9(5), 488; https://doi.org/10.3390/rs9050488 - 16 May 2017
Cited by 59 | Viewed by 7679
Abstract
Leaf area index (LAI) is a key input in models describing biosphere processes and has widely been used in monitoring crop growth and in yield estimation. In this study, a hybrid inversion method is developed to estimate LAI values of winter oilseed rape during growth using high spatial resolution optical satellite data covering a test site located in southeast China. Based on PROSAIL (coupling of PROSPECT and SAIL) simulation datasets, nine vegetation indices (VIs) were analyzed to identify the optimal independent variables for estimating LAI values. The optimal VIs were selected using curve fitting methods and the random forest algorithm. Hybrid inversion models were then built to determine the relationships between the optimal simulated VIs and LAI values (generated by the PROSAIL model) using several modeling methods, including curve fitting, k-nearest neighbor (kNN), and random forest regression (RFR). Finally, the mapping and estimation of winter oilseed rape LAI using reflectance obtained from Pleiades-1A, WorldView-3, SPOT-6, and WorldView-2 were implemented using the inversion method, and the LAI estimation accuracy was validated using ground-measured datasets acquired during the 2014–2015 growing season. Our study indicates that, based on the estimation results derived from the different datasets, RFR is the optimal of the three modeling algorithms, with R2 > 0.954 and RMSE < 0.218. Using the optimal VIs, the remote sensing-based mapping of winter oilseed rape LAI yielded an accuracy of R2 = 0.520 and RMSE = 0.923 (RRMSE = 93.7%). These results have demonstrated the potential operational applicability of the hybrid method proposed in this study for the mapping and retrieval of winter oilseed rape LAI values at field scales using multi-source and high spatial resolution optical remote sensing datasets. Details provided by this high resolution mapping cannot be easily discerned at coarser mapping scales and over larger spatial extents that usually employ lower resolution satellite images. Our study therefore has significant implications for field crop monitoring at local scales, providing relevant data for agronomic practices and precision agriculture. Full article
(This article belongs to the Special Issue Earth Observations for Precision Farming in China (EO4PFiC))
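A minimal sketch of the hybrid step, fitting a random forest on simulated (VI, LAI) pairs and applying it to new VIs, is given below; the arrays, ntree = 500 and mtry = 2 are assumptions for illustration, not the paper's tuned values.

```python
# Hedged sketch: hybrid inversion = regression trained on PROSAIL simulations.
# Synthetic data and hyperparameters are placeholders, not the paper's setup.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
vi_sim = rng.uniform(0.0, 1.0, size=(1944, 5))   # simulated optimal VIs
lai_sim = rng.uniform(0.0, 7.0, size=1944)       # matching PROSAIL LAI

rfr = RandomForestRegressor(n_estimators=500, max_features=2, random_state=1)
rfr.fit(vi_sim, lai_sim)                         # "ntree"/"mtry" analogues

vi_image = rng.uniform(0.0, 1.0, size=(10, 5))   # would be per-pixel image VIs
lai_map = rfr.predict(vi_image)                  # LAI estimate per pixel
```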
Figure 1: Location of the experimental field and distribution of experimental plots. The study area is shown on a Pleiades-1A image acquired on 4 December 2014.
Figure 2: Optimization of random forest parameters (ntree and mtry) using RMSEC. The optimal ntree and mtry that yielded the lowest RMSEC are identified with black arrows.
Figure 3: Measuring the variable (VI) importance in predicting oilseed rape LAI using the RFR method. The models for each growth stage were developed using the optimal combination of mtry and ntree. A higher %IncMSE indicates greater variable importance.
Figure 4: Selecting the optimal number of variables (VIs) using the backward elimination search function. The RMSECV is calculated from the calibration datasets (n = 1944) using five-fold cross validation.
Figure 5: Spatial distribution of oilseed rape LAI estimated from high spatial resolution images and optimal VIs: (a) SR, PVI, MSR, TSAVI, and ARVI; (b) SR, NDVI, and TSAVI; (c) SR, NDVI, NLI, MSR, and TSAVI; and (d) MSR, respectively, for Pleiades-1A, WV-3, SPOT-6, and WV-2, based on the RFR method.
Figure 6: Relationship between measured oilseed rape LAI values and oilseed rape LAI values estimated using the RFR model with the optimal predictive variables.
18576 KiB  
Article
Physically Based Susceptibility Assessment of Rainfall-Induced Shallow Landslides Using a Fuzzy Point Estimate Method
by Hyuck-Jin Park, Jung-Yoon Jang and Jung-Hyun Lee
Remote Sens. 2017, 9(5), 487; https://doi.org/10.3390/rs9050487 - 16 May 2017
Cited by 25 | Viewed by 7686
Abstract
The physically based model has been widely used in rainfall-induced shallow landslide susceptibility analysis because of its capacity to reproduce the physical processes governing landslide occurrence and its higher predictive capability. However, one of the difficulties in applying the physically based model is that uncertainties arising from spatial variability, measurement errors, and incomplete information apply to the input parameters and the analysis procedure. Uncertainties have been recognized as an important cause of mismatch between predicted and observed distributions of landslide occurrence. Therefore, probabilistic analysis has been used to quantify the uncertainties. However, some uncertainties, arising from incomplete information, cannot be managed satisfactorily using a probabilistic approach. Fuzzy set theory is applicable in this case. In this study, in order to handle uncertainty propagation through a physical model, fuzzy set theory, coupled with the vertex method and the point estimate method, was adopted for regional landslide susceptibility assessment. The proposed approach was used to evaluate susceptibility to rainfall-induced shallow landslides for a regional study area, and the analysis results were compared with a landslide inventory to evaluate the performance of the proposed approach. The AUC values arising from the landslide susceptibility analyses using the proposed approach and probabilistic analysis were 0.734 and 0.736, respectively. However, when the COV values of the input parameters were reduced, the AUC values of the proposed approach and the probabilistic analysis were reduced to 0.722 and 0.688, respectively. This means that the performance of the fuzzy approach is similar to that of probabilistic analysis but is more robust against variation of the input parameters. Thus, at catchment scale, the fuzzy approach can respond appropriately to the uncertainties inherent in physically based landslide susceptibility analysis, and is especially advantageous when the amount of quality data is very limited. Full article
(This article belongs to the Special Issue Remote Sensing of Landslides)
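The α-cut/vertex mechanics can be illustrated in a few lines; the triangular fuzzy numbers and the dry infinite-slope factor-of-safety formula below are textbook placeholders, not the paper's parameter maps or hydrological model.

```python
# Hedged sketch: vertex method on an alpha-cut of triangular fuzzy inputs,
# propagated through a standard dry infinite-slope factor of safety (FS).
import itertools
import math

def alpha_cut(tri, alpha):
    a, b, c = tri                      # (min, mode, max) of a triangular number
    return (a + alpha * (b - a), c - alpha * (c - b))

def factor_of_safety(phi_deg, c_kpa, gamma=18.0, depth=2.0, slope_deg=35.0):
    beta = math.radians(slope_deg)
    resisting = c_kpa + gamma * depth * math.cos(beta) ** 2 * math.tan(math.radians(phi_deg))
    driving = gamma * depth * math.sin(beta) * math.cos(beta)
    return resisting / driving

phi = (28.0, 32.0, 36.0)               # friction angle (deg), assumed fuzzy
cohesion = (2.0, 5.0, 8.0)             # cohesion (kPa), assumed fuzzy
corners = itertools.product(alpha_cut(phi, 0.5), alpha_cut(cohesion, 0.5))
fs_values = [factor_of_safety(p, c) for p, c in corners]
fs_interval = (min(fs_values), max(fs_values))  # fuzzy FS at alpha = 0.5
print(fs_interval)
```

Repeating this over a stack of α levels yields the fuzzy membership function of FS for each cell, which is what the point estimate step then summarizes.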
Figure 1: The triangular membership function and α-cut.
Figure 2: Geological map and the locations of the landslides.
Figure 3: The distributions of (a) slope angle and (b) elevation.
Figure 4: The distributions of (a) friction angle, (b) cohesion, (c) unit weight, (d) hydraulic conductivity, and (e) soil thickness.
Figure 5: Confusion matrix.
Figure 6: Map showing slope failure probability predicted using fuzzy PEM.
Figure 7: ROC graph comparing the analysis results. A: fuzzy PEM; B: probabilistic analysis (Monte Carlo simulation); C: deterministic analysis; D: fuzzy PEM with reduced COV; E: probabilistic analysis with reduced COV; F: fuzzy PEM with ±1σ; G: fuzzy PEM with ±3σ.
Figure 8: Map showing slope failure probability predicted using Monte Carlo simulation.
Figure 9: Map showing the factor of safety predicted using the deterministic analysis.
Figure 10: Maps showing slope failure probability predicted using (a) fuzzy PEM with reduced COV and (b) Monte Carlo simulation with reduced COV.
Figure 11: Maps showing slope failure probability predicted using fuzzy PEM with (a) fuzzy numbers of mean ±1σ and (b) fuzzy numbers of mean ±3σ.
14148 KiB  
Article
Rural Settlement Subdivision by Using Landscape Metrics as Spatial Contextual Information
by Xinyu Zheng, Bowen Wu, Melanie Valerie Weston, Jing Zhang, Muye Gan, Jinxia Zhu, Jinsong Deng, Ke Wang and Longmei Teng
Remote Sens. 2017, 9(5), 486; https://doi.org/10.3390/rs9050486 - 16 May 2017
Cited by 35 | Viewed by 9330
Abstract
Multiple policy projects have changed land use and land cover (LULC) in China’s rural regions over the past years, resulting in two types of rural settlements: new-fashioned and old-fashioned. Precise extraction of and discrimination between these two settlement types are vital for sustainable land use development. It is difficult to identify these two types via remote sensing images due to their similarities in spectrum, texture, and geometry. This study attempts to discriminate different types of rural settlements by using a spatial contextual information extraction method based on Gaofen 2 (GF-2) images, which integrates hierarchical multi-scale segmentation and landscape analysis. A preliminary LULC map was derived by using only traditional spectral and geometrical features at a finer scale. Subsequently, a vertical connection was built between superobjects and subobjects, and landscape metrics were computed. The vertical connection was used for assigning landscape contextual information to subobjects. Finally, a classification phase was conducted, in which only multi-scale contextual information was adopted, to discriminate between new-fashioned and old-fashioned rural settlements. Compared with previous studies on multi-scale contextual information, this paper employs landscape metrics to quantify contextual characteristics, rather than traditional spectral, textural, and topological relationship information, from superobjects. Our findings indicate that this approach effectively identified and discriminated the two types of rural settlements, with producer’s and user’s accuracies both over 80%. A comparison with a conventional top-down hierarchical classification scheme showed that this novel approach improved accuracy, precision, and recall. Our results confirm that multi-scale contextual information with landscape metrics provides valuable spatial information for classification, and they indicate the practicability, applicability, and effectiveness of this synthesized approach in distinguishing different types of rural settlements. Full article
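As a small illustration of turning a superobject into contextual features, the sketch below computes Shannon's diversity index (SHDI) from the preliminary LULC labels of the subobjects inside one coarse segment; PD, ED and LSI would be derived analogously, and the label array is a toy assumption.

```python
# Hedged sketch: one landscape metric (SHDI) as superobject context.
import numpy as np

def shdi(labels):
    """Shannon's diversity index of LULC labels within one superobject."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p)))

subobject_labels = np.array([1, 1, 2, 2, 2, 3, 1, 2])  # toy LULC classes
context = shdi(subobject_labels)   # attached to every subobject as a feature
print(context)
```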
Figure 1: Image examples, both on the ground and from satellite, for new-fashioned and old-fashioned rural settlements.
Figure 2: New-fashioned and old-fashioned rural settlements show different landscape characteristics. Based on the multi-level segmentation, settlement rooftops can be identified at a finer scale, while different settlement communities can be discriminated at a coarser scale.
Figure 3: The study area, Longxiang subdistrict, is a typical rural region in Zhejiang Province; a GF-2 image of the study area subset is shown in true color.
Figure 4: Flow chart of the classification framework in this paper, including two-level segmentation, contextual information extraction, classification, and the comparison method.
Figure 5: Examples of LULC cover types throughout the rural area: (a) greenhouse; (b,c) new-fashioned rural settlements; (d) industrial warehouse; (e,f) old-fashioned rural settlement examples.
Figure 6: ESP result plot diagram. Local variance (LV, black circles and gray line) and the rate of change of LV (ROC-LV, black triangles and black line) are plotted against the corresponding scale. Grey dotted vertical lines indicate the optimal scale parameters selected.
Figure 7: Output map from the SVM classifier in the preliminary classification phase for the whole study area (a); subset examples of the segmentation result at the coarser (b) and finer scale (c).
Figure 8: The discrimination results obtained by using contextual information (a) and the top-down hierarchical approach (b). The black arrows indicate that some communities can be discriminated accurately by utilizing contextual information, while by using conventional single-scale object features, these two types of settlements remain mixed. Some subset examples are collected on the final map: (c) aggregation of new-fashioned settlement; (d) example of old-fashioned settlement; and mixed scenarios of these two types (e,f); the land-use planning map of rural villages as reference data is shown in (e,f).
Figure 9: Feature value comparison between settlement types in spectrum (NDVI), geometry (density), and context. PLAND VEG: the PLAND of vegetation; PD: patch density; ED: edge density; LSI: landscape shape index; SHDI: Shannon’s diversity index. Feature values were normalized between 0 and 100.
10240 KiB  
Article
Evaluation of the Plant Phenology Index (PPI), NDVI and EVI for Start-of-Season Trend Analysis of the Northern Hemisphere Boreal Zone
by Paulina Karkauskaite, Torbern Tagesson and Rasmus Fensholt
Remote Sens. 2017, 9(5), 485; https://doi.org/10.3390/rs9050485 - 16 May 2017
Cited by 112 | Viewed by 12668
Abstract
Satellite remote sensing of plant phenology provides an important indicator of climate change. However, start of the growing season (SOS) estimates in Northern Hemisphere boreal forest areas are known to be challenged by the presence of seasonal snow cover and limited seasonality in the greenness signal for evergreen needleleaf forests, which can both bias and impede trend estimates of SOS. The newly developed Plant Phenology Index (PPI) was specifically designed to overcome both problems. Here we use Moderate Resolution Imaging Spectroradiometer (MODIS) data (2000–2014) to analyze the ability of PPI to estimate start of season (SOS) in boreal regions of the Northern Hemisphere, in comparison to two other widely applied indices for SOS retrieval: the Normalized Difference Vegetation Index (NDVI) and the Enhanced Vegetation Index (EVI). Satellite-based SOS is evaluated against gross primary production (GPP)-retrieved SOS derived from a network of flux tower observations in boreal areas (a total of 81 site-years analyzed). Spatiotemporal relationships between SOS derived from PPI, EVI and NDVI are furthermore studied for different boreal land cover types and regions. The overall correlation between SOS derived from the VIs and ground measurements was rather low, but PPI performed significantly better (r = 0.50, p < 0.01) than EVI and NDVI, which both showed a very poor correlation (r = 0.11, p = 0.16 and r = 0.08, p = 0.24). PPI, EVI and NDVI overall produce similar trends in SOS for the Northern Hemisphere, showing an advance in SOS towards earlier dates (0.28, 0.23 and 0.26 days/year), but a pronounced difference in trend estimates between PPI and EVI/NDVI is observed for different land cover types. Deciduous needleleaf forest is characterized by the largest advance in SOS when considering all indices, yet PPI showed less dramatic changes as compared to EVI/NDVI (0.47 days/year as compared to 0.62 and 0.74). PPI SOS trends were found to be higher for deciduous broadleaf forests and savannas (0.54 and 0.56 days/year). Taken together, the findings of this study suggest improved performance of PPI over NDVI and EVI in the retrieval of SOS in boreal regions, and precautions must be taken when interpreting spatio-temporal patterns of SOS from the latter two indices. Full article
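For orientation, a common threshold-style SOS retrieval is sketched below on a synthetic spring curve: SOS is the first day the index exceeds a fixed fraction of its seasonal amplitude. The 25% fraction and the logistic curve are assumptions; the paper's actual PPI/NDVI/EVI processing chain is more involved.

```python
# Hedged sketch: amplitude-threshold SOS on a synthetic seasonal curve.
import numpy as np

doy = np.arange(1, 366)
vi = 0.1 + 0.6 / (1.0 + np.exp(-(doy - 140) / 10.0))   # synthetic spring rise

frac = 0.25                                  # assumed amplitude fraction
threshold = vi.min() + frac * (vi.max() - vi.min())
sos = int(doy[np.argmax(vi >= threshold)])   # first crossing, ~DOY 129 here
print(sos)
```

Running such a retrieval per pixel and per year gives the SOS time series to which a trend (days/year) can then be fitted.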
Figure 1: (A) Boreal zone of the Northern Hemisphere delineation based on the Terrestrial Ecoregions of the World (TEOW) dataset [51]; (B) land cover classes based on the International Geosphere–Biosphere Programme (IGBP), derived from MODIS Land Cover Type products (MCD12Q1) (Land Processes Distributed Active Archive Center (LP DAAC), lpdaac.usgs.gov).
Figure 2: Vegetation index start of season (SOS) evaluation against gross primary production (GPP) SOS derived for the flux tower sites (Figure 1) (n = 81) for (A) the Plant Phenology Index (PPI); (B) the Enhanced Vegetation Index (EVI); and (C) the Normalized Difference Vegetation Index (NDVI); (D) seasonality (2000–2015) of PPI, EVI and NDVI for the pixels used in the evaluation against in situ GPP-SOS (average values for all sites shown). The time series is split into three periods for improved readability.
Figure 3: (A) Per-pixel average PPI SOS (2000–2014); (B) relative difference between PPI and NDVI SOS; and (C) relative difference between PPI and EVI SOS (2000–2014). Water bodies and pixels of forest loss are masked.
Figure 4: Per-pixel trend of VI SOS (2000–2014). (A) PPI SOS, significant pixels; (B) PPI SOS, all pixels; (C) EVI SOS, significant pixels; (D) EVI SOS, all pixels; (E) NDVI SOS, significant pixels; (F) NDVI SOS, all pixels.
4043 KiB  
Article
A Machine Learning Based Reconstruction Method for Satellite Remote Sensing of Soil Moisture Images with In Situ Observations
by Chenjie Xing, Nengcheng Chen, Xiang Zhang and Jianya Gong
Remote Sens. 2017, 9(5), 484; https://doi.org/10.3390/rs9050484 - 16 May 2017
Cited by 34 | Viewed by 8181
Abstract
Surface soil moisture is an important environmental variable that is dominant in a variety of research and application areas. Acquiring spatiotemporally continuous soil moisture observations is therefore of great importance. Weather conditions can contaminate optical remote sensing observations of soil moisture, and the absence of remote sensors causes gaps in regional soil moisture observation time series. Therefore, reconstruction is highly motivated to overcome such contamination and to fill in such gaps. In this paper, we propose a novel image reconstruction algorithm that improves upon the Satellite and In situ sensor Collaborated Reconstruction (SICR) algorithm provided by our previous publication. Taking artificial neural networks as a model, the complex and highly variable relationships between in situ observations and remote sensing soil moisture are better projected. With historical data for the network training, feedforward neural networks (FNNs) project in situ soil moisture to remote sensing soil moisture with better performance than conventional models. Consequently, regional soil moisture observations can be reconstructed under full cloud contamination or under a total absence of remote sensors. Experiments confirmed better reconstruction accuracy and precision with this improvement than with SICR. The new algorithm enhances the temporal resolution of high spatial resolution remote sensing regional soil moisture observations with good quality and can benefit multiple soil moisture-based applications and research. Full article
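A minimal stand-in for the C1-pixel recovery model is shown below: a feedforward network with one hidden layer of six neurons (mirroring Figure 2) mapping in situ soil moisture to the co-located pixel value; the training pairs are synthetic placeholders.

```python
# Hedged sketch: FNN mapping in situ soil moisture to a remote sensing pixel.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
sm_insitu = rng.uniform(5.0, 40.0, size=(200, 1))            # % volumetric (toy)
sm_pixel = 0.8 * sm_insitu[:, 0] + rng.normal(0.0, 1.5, 200)  # toy pixel values

fnn = MLPRegressor(hidden_layer_sizes=(6,), max_iter=5000, random_state=2)
fnn.fit(sm_insitu, sm_pixel)

recovered = fnn.predict([[22.0]])   # recover one cloud-contaminated C1 pixel
print(recovered)
```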
Figure 1: Workflow differences between Neu-SICR (left) and SICR (right). The first stage (upper part) of the recovery in SICR is innovated in Neu-SICR, while the second to fourth stages (lower part) are kept original.
Figure 2: Feedforward neural network as the C1 pixel recovery model. Circles represent neurons in the FNN, and arrows represent weighted edges between the neurons. Arrow direction shows the data flow direction. SMi is the in situ soil moisture value for a C1 pixel, while SMr is the recovered soil moisture value for this C1 pixel. This figure shows a C1 pixel recovery model with one hidden layer of 6 neurons.
Figure 3: True remotely sensed soil moisture and recovered soil moisture with respect to in situ observations. The horizontal axis is the in situ soil moisture domain; the vertical axis is the C1 pixel value domain. Dashed line segments represent the C1 recovering models; gray circles are recovered pixel values; gray squares are real values acquired by GF-1 WFV; and a cross marks the recovered target value. (a) Soil moisture recovery curve of C1 at the in situ observatory Wtars (No. 2053); (b) at Hytop (No. 2054); (c) at Hodges (No. 2055); (d) at Stanley Farm (No. 2056); (e) at AAMU-JTG (No. 2057); (f) at Hartselle Usda (No. 2058); (g) at Newby Farm (No. 2059); (h) at McAllister Farm (No. 2075); (i) at Allen Farms (No. 2076); (j) at Eastview Farm (No. 2077); (k) at Bragg Farm (No. 2078).
Figure 4: Recovery result of C1 and C2 pixels shown in the target image. The color bar on the right shows the corresponding soil moisture percentage. Bright pixels are recovered; dark blue pixels with zero values are the water area or not-yet-recovered pixels.
Figure 5: Recovered soil moisture image after C4 pixels were recovered. The color bar on the right shows the corresponding soil moisture percentage. Bright pixels are recovered; dark blue pixels with zero values are the water area.
Figure 6: Histogram of the relative reconstruction error of the whole target image. The water area and the outliers described in Section 4.2 are excluded from this figure.
Figure 7: (a) Error histogram of the recovered target image; (b) error histogram of recovery by the original SICR algorithm.
6973 KiB  
Article
Automatic Color Correction for Multisource Remote Sensing Images with Wasserstein CNN
by Jiayi Guo, Zongxu Pan, Bin Lei and Chibiao Ding
Remote Sens. 2017, 9(5), 483; https://doi.org/10.3390/rs9050483 - 15 May 2017
Cited by 21 | Viewed by 8299
Abstract
In this paper, a non-parametric model based on a Wasserstein CNN is proposed for color correction. It is suitable for large-scale remote sensing image preprocessing from multiple sources under various viewing conditions, including illumination variances, atmosphere disturbances, and sensor and aspect angles. Color correction aims to alter the color palette of an input image to a standard reference which does not suffer from the mentioned disturbances. Most current methods highly depend on the similarity between the inputs and the references, with respect to both the contents and the conditions, such as illumination and atmosphere condition. Segmentation is usually necessary to alleviate the color leakage effect on the edges. Different from previous studies, the proposed method matches the color distribution of the input dataset with the references in a probabilistic optimal transportation framework. Multi-scale features are extracted from the intermediate layers of the lightweight CNN model and are utilized to infer the undisturbed distribution. The Wasserstein distance is utilized to calculate the cost function to measure the discrepancy between two color distributions. The advantage of the method is that no registration or segmentation processes are needed, benefiting from the local texture processing potential of the CNN models. Experimental results demonstrate that the proposed method is effective when the input and reference images are of different sources, resolutions, and under different illumination and atmosphere conditions. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)
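Because the color distributions are one-dimensional histograms, the Wasserstein cost reduces to the area between cumulative distribution functions, which is what the sketch below computes; the two Gaussian-like histograms are illustrative inputs, not the model's predicted palettes.

```python
# Hedged sketch: exact 1-D Wasserstein (earth mover's) distance between two
# grayscale histograms, via the L1 difference of their CDFs (unit bin width).
import numpy as np

def wasserstein_1d(hist_p, hist_q):
    p = hist_p / hist_p.sum()
    q = hist_q / hist_q.sum()
    return float(np.abs(np.cumsum(p) - np.cumsum(q)).sum())

rng = np.random.default_rng(3)
h1 = np.histogram(rng.normal(100, 20, 10000), bins=256, range=(0, 256))[0]
h2 = np.histogram(rng.normal(120, 20, 10000), bins=256, range=(0, 256))[0]
print(wasserstein_1d(h1.astype(float), h2.astype(float)))  # grows with shift
```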
Figure 1: Color discrepancy in remote sensing images. (a,b) Digital Globe images on different dates from Google Earth; (c,d) Digital Globe (bottom right) and NASA (National Aeronautics and Space Administration) Copernicus (top left) images on the same date from Google Earth; (e) GF1 (Gaofen-1) images from different sensors, same area and date.
Figure 2: Matching algorithms of “scheme A” take both the input and the reference in the form of histograms. As this scheme is not content related, two similar distributions with different contexts could not be mapped to their corresponding references with one unified mapping.
Figure 3: Matching algorithms of “scheme B” take both the input and the reference in the form of images. Similar distributions could be mapped to different corresponding references, as the scheme is content based. However, the same grayscales could be mapped to different grayscales when they are in different contexts, violating Property 1.
Figure 4: Matching algorithms of “scheme C” take inputs in the form of images and references in the form of histograms. Similar distributions could be mapped to different corresponding references, as the scheme is content related.
Figure 5: Calculation method of the Wasserstein distance between the inferred histograms and the ground-truth reference. STEP 1: stack the histograms on the frequency axis; STEP 2: subtract the stacked histograms, and integrate with respect to the cumulative frequency.
Figure 6: Structure of the “fire module” in the Squeeze-net.
Figure 7: Model structure of the proposed model.
Figure 8: Color transforming curves in the random augmentation process.
Figure 9: Results of matching the color palette of GF1 to GF2. Bars: histograms of input patches; solid lines with color: predicted histograms of our model; dashed lines in black: histograms of reference images; from top to bottom: histograms of images of the same area, but under different illumination and atmospheric conditions.
Figure 10: Color matching results of GF1 and GF2. From top to bottom: satellite images of the same area, but under different illumination and atmospheric conditions; left: input images; middle: output images with the predicted color palette; right: reference images, only needed in the training process to calculate the loss function. Once the model is fully trained, it is able to infer the corrected color palette based on the content of the input images in the absence of a reference.
Figure 11: Two one-dimensional uniform distributions.
Figure 12: Comparisons between color matching methods.
Figure 13: Boxplots of L1-norm distances between the processed images and the ground truth with respect to (left) ORB, (middle) SIFT, and (right) BRISK feature descriptors. The distances represent the dissimilarity between the processed results and the ground truth (the smaller the better). There are five horizontal line segments in each patch, indicating five percentiles of the distances within the images processed by the corresponding method; from top to bottom: the maximum (worst) distance, the worst-25% distance, the median distance, the best-25% distance, and the minimum (best) distance.
14035 KiB  
Article
Ground Ammonia Concentrations over China Derived from Satellite and Atmospheric Transport Modeling
by Lei Liu, Xiuying Zhang, Wen Xu, Xuejun Liu, Xuehe Lu, Shanqian Wang, Wuting Zhang and Limin Zhao
Remote Sens. 2017, 9(5), 467; https://doi.org/10.3390/rs9050467 - 15 May 2017
Cited by 34 | Viewed by 9456
Abstract
As a primary basic gas in the atmosphere, atmospheric ammonia (NH3) plays an important role in determining air quality, environmental degradation, and climate change. However, the limited ground observations currently present a barrier to estimating ground NH3 concentrations on a regional scale, thus preventing a full understanding of the atmospheric processes in which this trace gas is involved. This study estimated the ground NH3 concentrations over China by combining the Infrared Atmospheric Sounding Interferometer (IASI) satellite NH3 columns and NH3 profiles from an atmospheric chemistry transport model (CTM). The estimated ground NH3 concentrations showed agreement with the variability in annual ground NH3 measurements from the Chinese Nationwide Nitrogen Deposition Monitoring Network (NNDMN). Great spatial heterogeneity of ground NH3 concentrations was found across China, and high ground NH3 concentrations were found in Northern China, Southeastern China, and some areas in Xinjiang Province. The maximum ground NH3 concentrations over China occurred in summer, followed by the spring, autumn, and winter seasons, in agreement with the seasonal patterns of NH3 emissions in China. This study suggests that combining NH3 profiles from CTMs with NH3 columns from satellites can yield reliable ground NH3 concentrations over China. Full article
(This article belongs to the Special Issue Remote Sensing of Atmospheric Pollution)
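The column-to-surface scaling behind this approach can be written in one line: the satellite column is multiplied by the CTM ratio of surface concentration to model column. The numbers below are illustrative assumptions, not MOZART or IASI outputs.

```python
# Hedged sketch: surface NH3 = satellite column x (CTM surface / CTM column).
ctm_surface = 4.2      # ug N m^-3, model lowest-layer NH3 (assumed)
ctm_column = 8.5e15    # molecules cm^-2, model NH3 column (assumed)
iasi_column = 1.1e16   # molecules cm^-2, satellite NH3 column (assumed)

ground_nh3 = iasi_column * (ctm_surface / ctm_column)
print(f"{ground_nh3:.1f} ug N m^-3")   # ~5.4 with these illustrative numbers
```

The design choice is that the satellite corrects the model's column amount while the model supplies the vertical shape, which is why the result tracks the NNDMN surface measurements better than the raw model surface layer.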
Figure 1. Spatial distribution of ground monitoring NH3 sites in the Chinese Nationwide Nitrogen Deposition Monitoring Network (NNDMN).
Figure 2. Schematic of the method to estimate the satellite-derived ground NH3 concentrations.
Figure 3. Spatial distribution of the relative error (a), correlation (b), and root-mean-square error (RMSE) (c) of the estimated ground NH3 concentration (µg N m⁻³) at 44 NNDMN sites.
Figure 4. Yearly comparisons between the estimated and measured ground NH3 concentrations (µg N m⁻³). (a) Measured ground NH3 concentrations versus those estimated from MOZART at the lowest layer, before applying the satellite data; (b) measured versus estimated ground NH3 concentrations after applying the satellite data using the methods in Section 2.4.
Figure 5. Spatial distribution of the ground NH3 concentration (µg N m⁻³). (a) Yearly estimated ground NH3 concentrations; (b) percent farmland area; (c) Infrared Atmospheric Sounding Interferometer (IASI) NH3 columns; (d) ratio of ground NH3 concentration to NH3 columns from MOZART.
Figure 6. Seasonal patterns of ground NH3 concentrations in China. (a) Monthly variations of ground NH3 concentrations (µg N m⁻³) in China; (b) monthly variations of the total NH3 emissions (Tg, 10¹² g) in China reported by Kang et al. [36]; (c) monthly variations of the sum of fertilizer and livestock NH3 emissions (Tg) in China reported by Huang et al. [35]; (d) monthly variations of the fertilizer NH3 emissions (Tg) in China reported by Xu et al. [37].
Figure 7. Seasonal variations of ground NH3 concentrations (µg N m⁻³), temperature (°C), precipitation (mm), humidity (%), and wind speed (m/s) at the five sites with the best-simulated ground NH3 concentrations, from January 2010 to December 2013 (0–12, 2010; 13–24, 2011; 25–36, 2012; 37–48, 2013). The relationships between the ground NH3 concentrations and precipitation (mm), humidity (%), and wind speed (m/s) at each site are provided in Figures A4–A8.
Figure 8. Vertical NH3 concentrations (µg N m⁻³) simulated by MOZART at five locations in January 2013.
Figure 9. Site bias of ground NH3 concentrations across China, illustrated by interpolating the residuals between the measured and estimated values with inverse-distance-weighted (IDW) interpolation. The figures were generated using ArcGIS 12.0 software (https://www.arcgis.com/).
Figure 10. Relative error (%) of IASI NH3 columns. (a) Annual IASI NH3 error (cloud coverage lower than 25%) averaged from 2008 to 2015; (b) averaged monthly relative error from 2008 to 2015 in different regions (each dot indicates the relative error for one month in one region); (c) temporal variations of the relative error over China at a monthly scale.
Figures 11–15. Seasonal variations of ground NH3 concentrations (µg N m⁻³), temperature (°C), precipitation (mm), humidity (%), and wind speed (m/s) at GZL, TLF, CL, YPH, and FYU, respectively, from January 2010 to December 2013 (0–12, 2010; 13–24, 2011; 25–36, 2012; 37–48, 2013).
Figure 16. (a,b) R² and RMSE (molec./cm²) for the Gaussian simulation of the NH3 profiles (68–142°E, 5–55°N) in 2013.
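Figure 16 evaluates how well a Gaussian shape reproduces the modeled vertical NH3 profiles via R² and RMSE. Below is a hedged sketch of such a fit with scipy.optimize.curve_fit; the heights, profile values, and initial guesses are synthetic, for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(z, a, mu, sigma):
    # Gaussian shape used to approximate the vertical NH3 profile
    return a * np.exp(-((z - mu) ** 2) / (2.0 * sigma ** 2))

# Illustrative profile: NH3 decreasing with height (values are made up)
z = np.linspace(0.0, 5.0, 20)                    # height above ground, km
rng = np.random.default_rng(42)
profile = 8.0 * np.exp(-(z ** 2) / 2.0) + rng.normal(0.0, 0.1, z.size)

params, _ = curve_fit(gaussian, z, profile, p0=[profile.max(), 0.0, 1.0])
fitted = gaussian(z, *params)

# Goodness-of-fit metrics of the kind reported in Figure 16
rmse = np.sqrt(np.mean((profile - fitted) ** 2))
r2 = 1.0 - np.sum((profile - fitted) ** 2) / np.sum((profile - profile.mean()) ** 2)
print(f"RMSE = {rmse:.3f}, R^2 = {r2:.3f}")
```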
5825 KiB  
Article
Hyperspectral Target Detection via Adaptive Joint Sparse Representation and Multi-Task Learning with Locality Information
by Yuxiang Zhang, Ke Wu, Bo Du, Liangpei Zhang and Xiangyun Hu
Remote Sens. 2017, 9(5), 482; https://doi.org/10.3390/rs9050482 - 14 May 2017
Cited by 22 | Viewed by 7333
Abstract
Target detection from hyperspectral images is an important problem, but it faces the critical challenge of simultaneously reducing spectral redundancy and preserving discriminative information. Recently, the joint sparse representation and multi-task learning (JSR-MTL) approach was proposed to address this challenge. However, it does not fully explore the prior class label information of the training samples or the difference between the target dictionary and the background dictionary when constructing the model. In addition, estimation bias may exist for the unknown coefficient matrix under ℓ1-norm minimization, which is usually inconsistent in variable selection. To address these problems, this paper proposes an adaptive joint sparse representation and multi-task learning detector with locality information (JSRMTL-ALI). The proposed method has the following capabilities: (1) it takes full advantage of the prior class label information to construct an adaptive joint sparse representation and multi-task learning model; (2) it exploits the great difference between the target dictionary and the background dictionary with different regularization strategies, in order to better encode the task relatedness; and (3) it applies locality information by imposing an iterative weight on the coefficient matrix, in order to reduce the estimation bias. Extensive experiments were carried out on three hyperspectral images, and JSRMTL-ALI was found generally to show better detection performance than the other target detection methods. Full article
(This article belongs to the Special Issue Learning to Understand Remote Sensing Images)
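To make the dictionary-based decision rule concrete: sparse-representation detectors code each pixel over a target dictionary and a background dictionary and compare the reconstruction residuals. The sketch below is a deliberately simplified, single-task stand-in (plain least-squares coding instead of the adaptive sparse solver, and no locality weighting); it is not the authors' JSRMTL-ALI implementation.

```python
import numpy as np

def sr_detector_score(pixel, target_dict, background_dict):
    """Residual-difference score for one pixel spectrum.

    pixel           : (B,) spectrum under test
    target_dict     : (B, Nt) target training spectra as columns
    background_dict : (B, Nb) background training spectra as columns

    A large positive score means the background dictionary explains
    the pixel poorly relative to the target dictionary, i.e. the
    pixel is more likely a target.
    """
    coef_t = np.linalg.lstsq(target_dict, pixel, rcond=None)[0]
    coef_b = np.linalg.lstsq(background_dict, pixel, rcond=None)[0]
    r_t = np.linalg.norm(pixel - target_dict @ coef_t)
    r_b = np.linalg.norm(pixel - background_dict @ coef_b)
    return r_b - r_t

# Toy example: 5 bands, 3 target atoms, 4 background atoms
rng = np.random.default_rng(0)
A_t, A_b = rng.random((5, 3)), rng.random((5, 4))
x = A_t @ np.array([0.5, 0.3, 0.2])      # a pixel built from target atoms
print(sr_detector_score(x, A_t, A_b))    # positive -> target-like
```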
Figure 1. Illustration of the band cross-grouping strategy for the multiple detection tasks. HSI = hyperspectral image.
Figure 2. Schematic illustration of the adaptive joint sparse representation and multi-task learning detector with locality information (JSRMTL-ALI) algorithm.
Figure 3. The AVIRIS dataset.
Figure 4. The Indian dataset.
Figure 5. The Cri dataset.
Figure 6. Receiver operating characteristic (ROC) curves for the effectiveness investigation of the JSRMTL-ALI model.
Figure 7. Detection performance of JSRMTL-ALI versus the detection task number K.
Figure 8. Detection performance of JSRMTL-ALI versus the parameter ρ.
Figure 9. Detection performance of JSRMTL-ALI versus the size of the outer window region (OWR).
Figure 10. Detection performance of the eight detectors on the three datasets.
Figure 11. Separability maps of the eight detectors on the three datasets.
Figure 12. Two-dimensional plots of the detection map for the AVIRIS dataset.
Figure 13. Two-dimensional plots of the detection map for the Indian dataset.
Figure 14. Two-dimensional plots of the detection map for the Cri dataset.
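The ROC curves of Figure 6 plot the probability of detection against the false-alarm rate as a decision threshold sweeps over the detector scores. A minimal numpy version follows, with hypothetical scores and labels.

```python
import numpy as np

def roc_curve(scores, labels):
    """Probability of detection vs. false-alarm rate over all thresholds.

    scores : (N,) detector outputs; labels : (N,) 1 = target, 0 = background.
    """
    order = np.argsort(scores)[::-1]     # sort by descending score
    labels = np.asarray(labels)[order]
    tp = np.cumsum(labels)               # detections at each threshold
    fp = np.cumsum(1 - labels)           # false alarms at each threshold
    pd = tp / max(labels.sum(), 1)
    pfa = fp / max((1 - labels).sum(), 1)
    return pfa, pd

scores = np.array([0.9, 0.8, 0.35, 0.3, 0.1])
labels = np.array([1, 0, 1, 0, 0])
pfa, pd = roc_curve(scores, labels)
print(np.trapz(pd, pfa))                 # area under the ROC curve
```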
6719 KiB  
Article
Identifying the Lambertian Property of Ground Surfaces in the Thermal Infrared Region via Field Experiments
by Lili Tu, Zhihao Qin, Lechan Yang, Fei Wang, Jun Geng and Shuhe Zhao
Remote Sens. 2017, 9(5), 481; https://doi.org/10.3390/rs9050481 - 14 May 2017
Cited by 14 | Viewed by 7022
Abstract
Lambertian surfaces represent an important assumption when constructing thermal radiance transfer equations for remote sensing observations of ground surface temperatures. We tested, via field experiments, whether ground surfaces behave as Lambertian surfaces in the thermal infrared region. Because a Lambertian surface presents homogeneous thermal emission in all hemispheric directions for a given kinetic temperature and emissivity, we conducted a series of field experiments to characterize the emission of such ground surfaces. Four typical ground surfaces were selected for the observations of thermal emission: bare soil, grass, water, and concrete. Radiance thermometers were used to observe ground emission from seven directions: 30°, 45°, 60°, 90°, 120°, 135°, and 150°. Solar zenith angles were considered in the observation of ground emission. Experiments were conducted in five regions of China (Beijing, Nanjing, Xilinguole, Yongzhou, and Jiangmen) during both daytime and nighttime. To determine whether different observation angles have significantly different effects on radiance, statistical analyses (ANOVA and the Friedman test) were conducted. Post hoc multiple comparison tests and pairwise multiple comparisons were also conducted to examine the various pairings of observation angles and to measure the radiance differences. Roughly half of the radiance groups across all observed sites were tested via ANOVA, and the remaining groups, with unequal variances, were subjected to the Friedman test. The results indicate statistically significant differences in radiance among the seven angles for almost all of the sites (39 of the 40 groups). Our experiments therefore indicate that the selected ground surfaces, especially the grass and the bare soil, may not behave as Lambertian surfaces in the thermal infrared region. This is probably attributable to surface roughness, which we found to be an important factor affecting the observed magnitude of thermal emission from different directions. Therefore, whether a terrestrial surface can be assumed to be Lambertian should be judged from its geometric structure: when the surface is relatively smooth, its thermal emission is close to Lambertian. Full article
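The test-selection logic described in the abstract (one-way ANOVA for groups with roughly equal variances, the Friedman test otherwise) can be sketched with scipy.stats; the Levene variance screen, the 0.05 cut-off, and the data shapes are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np
from scipy import stats

def compare_angles(radiance_by_angle, alpha=0.05):
    """radiance_by_angle: sequence of seven arrays, one per viewing
    angle, each holding repeated radiance records for one site.

    Levene's test screens for equal variances; groups that pass go
    to one-way ANOVA, the rest to the Friedman test (which requires
    equal-length, paired records across the angles)."""
    _, p_levene = stats.levene(*radiance_by_angle)
    if p_levene > alpha:
        _, p_value = stats.f_oneway(*radiance_by_angle)
        test = "one-way ANOVA"
    else:
        _, p_value = stats.friedmanchisquare(*radiance_by_angle)
        test = "Friedman"
    return test, p_value

rng = np.random.default_rng(1)
groups = [rng.normal(10.0 + 0.1 * k, 0.5, 30) for k in range(7)]
print(compare_angles(groups))  # p < 0.05 would reject equal radiance
```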
Figure 1. Emission of the ground and its measurement in a specific direction. (a) Thermal radiation emitted from the ground into a hemispheric space; (b) measurement of ground thermal radiance in a specific direction, including the effects of atmospheric radiance.
Figure 2. Observation framework for the experiment.
Figure 3. Radiant surface temperature (RST) observation experiments over four typical surfaces: (a) bare soil in Nanjing, 2 August 2015; (b) concrete in Jiangmen, 29 October 2015; (c) grass in Xilinguole, 31 August 2015; (d) water in Yongzhou, 25 October 2015.
Figure 4. Responses of the seven thermometers to the same water surface under (a) high and (b) low temperature conditions, for the determination of calibration constants.
Figure 5. Results of the post hoc multiple comparison tests of the seven angles. (a) The least significant difference (LSD) test was used for the one-way ANOVA groups and Tamhane's T2 test for the Friedman groups; (b) Tukey's test was used for the one-way ANOVA groups and Tamhane's T2 test for the Friedman groups. Numbers in the grids mark how many times a significant difference occurred for the corresponding pair of angles.
Figure 6. Radiance changes in the seven directions at all sites in the daytime over 30 min of observation (because erroneous records taken during the observation period were deleted, the records for three sites are shorter than the others).
Figure 7. Radiance changes in the seven directions at all sites at night over 30 min of observation (because erroneous records taken during the observation period were deleted, the records for two sites are shorter than the others).
Figure 8. Average kinetic surface temperature (KST) in the seven directions at the five sites.
Figure 9. Bare soil and grass sites at the five stations.
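For reference, the directional measurement sketched in Figure 1b is usually written in the standard thermal radiance transfer form; the rendering below uses symbols defined inline and is not taken verbatim from the paper.

```latex
% Radiance observed at viewing angle \theta: surface emission plus
% reflected downwelling atmospheric radiance
L(\theta) = \varepsilon(\theta)\, B(T_s)
          + \bigl(1 - \varepsilon(\theta)\bigr)\, L_{\mathrm{atm}}^{\downarrow}
% B(T_s): Planck (blackbody) radiance at surface kinetic temperature T_s;
% \varepsilon(\theta): directional emissivity. For a Lambertian surface,
% \varepsilon is independent of \theta, so L(\theta) is constant across
% the hemisphere -- exactly what the seven-angle observations test.
```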