Remote Sens., Volume 11, Issue 22 (November-2 2019) – 124 articles

Cover Story: In this study, we developed land surface directional reflectance and albedo products from GOES-R ABI data using an optimization-based algorithm, which was also tested with data from the Himawari AHI. The surface anisotropy model parameters and aerosol optical depth were estimated simultaneously, and then surface albedo and reflectance were calculated. Validations against ground measurements and some existing satellite products showed good agreement, with bias values of −0.001 (ABI) and 0.020 (AHI) and RMSEs less than 0.065 for the hourly albedo estimation, and with RMSEs less than 0.042 (ABI) and 0.039 (AHI) for the hourly reflectance estimation. In conclusion, the proposed albedo and reflectance estimation can satisfy the NOAA accuracy requirements for operational climate and meteorological applications.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers, which are published in both HTML and PDF forms. To view a paper in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open it.
24 pages, 16384 KiB  
Article
Urban Land Use and Land Cover Classification Using Multisource Remote Sensing Images and Social Media Data
by Yan Shi, Zhixin Qi, Xiaoping Liu, Ning Niu and Hui Zhang
Remote Sens. 2019, 11(22), 2719; https://doi.org/10.3390/rs11222719 - 19 Nov 2019
Cited by 54 | Viewed by 8220
Abstract
Land use and land cover (LULC) are diverse and complex in urban areas. Remotely sensed images are commonly used for land cover classification but can hardly identify urban land use and functional areas because of the semantic gap (i.e., different definitions of similar or identical buildings). Social media data, “marks” left by people using mobile phones, have great potential to overcome this semantic gap. Multisource remote sensing data are also expected to be useful in distinguishing different LULC types. This study examined the capability of combined multisource remote sensing images and social media data in urban LULC classification. The multisource remote sensing images included a Chinese ZiYuan-3 (ZY-3) high-resolution image, a Landsat 8 Operational Land Imager (OLI) multispectral image, and a Sentinel-1A synthetic aperture radar (SAR) image. The social media data consisted of the hourly spatial distribution of users of WeChat, a ubiquitous messaging and payment platform in China. LULC was classified into 10 types, namely, vegetation, bare land, road, water, urban village, greenhouses, and residential, commercial, industrial, and educational buildings. A method that integrates object-based image analysis, decision trees, and random forests was used for LULC classification. The overall accuracy and kappa value attained by the combination of multisource remote sensing images and WeChat data were 87.55% and 0.84, respectively. They further improved to 91.55% and 0.89, respectively, by integrating the textural and spatial features extracted from the ZY-3 image. The ZY-3 high-resolution image was essential for urban LULC classification because it is necessary for the accurate delineation of land parcels. The addition of Landsat 8 OLI, Sentinel-1A SAR, or WeChat data also made an irreplaceable contribution to the classification of different LULC types. The Landsat 8 OLI image helped distinguish among urban villages, residential buildings, commercial buildings, and roads, while the Sentinel-1A SAR data reduced the confusion between commercial buildings, greenhouses, and water. By capturing the spatial and temporal dynamics of population density, the WeChat data improved the classification accuracies of urban villages, greenhouses, and commercial buildings.
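For readers who want a concrete starting point, the sketch below illustrates only the final, object-based classification stage described in the abstract, using a scikit-learn random forest; the feature groups, their dimensions, and the synthetic data are placeholders rather than the authors' actual implementation.

```python
# Minimal sketch of the per-object classification stage, assuming each image
# object (segment) has already been reduced to a feature vector concatenating
# ZY-3 spectral/texture statistics, Landsat 8 OLI band means, Sentinel-1A SAR
# backscatter, and hourly WeChat user-density profiles. Feature names/dimensions
# are illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_objects = 500
X = np.hstack([
    rng.random((n_objects, 8)),    # ZY-3 spectral + texture features (placeholder)
    rng.random((n_objects, 6)),    # Landsat 8 OLI band means (placeholder)
    rng.random((n_objects, 2)),    # Sentinel-1A SAR backscatter stats (placeholder)
    rng.random((n_objects, 24)),   # hourly WeChat user-density profile (placeholder)
])
y = rng.integers(0, 10, n_objects)  # 10 LULC classes

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)
print("OA:", accuracy_score(y_test, pred), "kappa:", cohen_kappa_score(y_test, pred))
```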
Figures: study area; multisource remote sensing images and WeChat user-density data; hourly WeChat user-density maps; typical LULC classes and collected samples; classification scheme; ZY-3 image segmentation at different scale parameters; decision tree; overall accuracies, kappa values, and per-class producer's/user's accuracies for different data combinations; classification maps and accuracy improvements obtained by adding the Landsat 8 OLI, Sentinel-1A SAR, and WeChat data; weekly variation in WeChat user density by LULC class; example classification comparisons.
20 pages, 2816 KiB  
Article
Fully Dense Multiscale Fusion Network for Hyperspectral Image Classification
by Zhe Meng, Lingling Li, Licheng Jiao, Zhixi Feng, Xu Tang and Miaomiao Liang
Remote Sens. 2019, 11(22), 2718; https://doi.org/10.3390/rs11222718 - 19 Nov 2019
Cited by 44 | Viewed by 4486
Abstract
The convolutional neural network (CNN) can automatically extract hierarchical feature representations from raw data and has recently achieved great success in the classification of hyperspectral images (HSIs). However, most CNN-based methods used in HSI classification fail to adequately exploit the strong complementary yet correlated information from each convolutional layer and only employ the features of the last convolutional layer for classification. In this paper, we propose a novel fully dense multiscale fusion network (FDMFN) that takes full advantage of the hierarchical features from all the convolutional layers for HSI classification. In the proposed network, shortcut connections are introduced between any two layers in a feed-forward manner, enabling features learned by each layer to be accessed by all subsequent layers. This fully dense connectivity pattern achieves comprehensive feature reuse and enforces discriminative feature learning. In addition, various spectral-spatial features with multiple scales from all convolutional layers are fused to extract more discriminative features for HSI classification. Experimental results on three widely used hyperspectral scenes demonstrate that the proposed FDMFN achieves better classification performance than several state-of-the-art approaches.
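The core idea of the fully dense connectivity can be illustrated with a short PyTorch sketch (not the authors' code): each convolutional layer receives the concatenation of all preceding feature maps, so every layer's output remains accessible to all subsequent layers. The layer count and growth rate below are illustrative assumptions.

```python
# A minimal sketch of "fully dense" connectivity: every convolutional layer takes
# the concatenation of all preceding outputs as input, so features from any layer
# are reused by all subsequent layers. Growth rate and layer count are illustrative.
import torch
import torch.nn as nn

class FullyDenseBlock(nn.Module):
    def __init__(self, in_channels: int, growth_rate: int = 16, n_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1, bias=False),
            ))
            channels += growth_rate  # the next layer sees all previous outputs

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))  # dense shortcut connections
            features.append(out)
        return torch.cat(features, dim=1)  # fuse multilevel features for classification

# Example: a 9x9 HSI patch with 30 spectral bands (after dimensionality reduction).
block = FullyDenseBlock(in_channels=30)
out = block(torch.randn(2, 30, 9, 9))
print(out.shape)  # torch.Size([2, 94, 9, 9]) -> 30 + 4 * 16 channels
```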
Figures: typical residual block; DenseNet-based classification flowchart; dense block architecture; fully dense connectivity pattern; FDMFN framework; influence of layer number, growth rate, and patch size on accuracy; classification maps and accuracies for the IP, UH, and KSC datasets; overall accuracy versus training-data ratio.
18 pages, 2690 KiB  
Article
Inter-Calibration of the OSIRIS-REx NavCams with Earth-Viewing Imagers
by David Doelling, Konstantin Khlopenkov, Conor Haney, Rajendra Bhatt, Brent Bos, Benjamin Scarino, Arun Gopalan and Dante S. Lauretta
Remote Sens. 2019, 11(22), 2717; https://doi.org/10.3390/rs11222717 - 19 Nov 2019
Cited by 5 | Viewed by 3554
Abstract
The Earth-viewed images acquired by the space probe OSIRIS-REx during its Earth gravity assist flyby maneuver on 22 September 2017 provided an opportunity to radiometrically calibrate the onboard NavCam imagers. Spatially, temporally, and angularly matched radiances from the Earth-viewing GOES-15 and DSCOVR-EPIC imagers were used as references for deriving the calibration gain of the NavCam sensors. An optimized all-sky tropical ocean ray-matching (ATO-RM) calibration approach that accounts for the spectral band differences, navigation errors, and angular geometry differences between NavCam and the reference imagers is formulated in this paper. Prior to ray-matching, the GOES-15 and EPIC pixel-level radiances were mapped into the NavCam field of view. The NavCam 1 ATO-RM gain is found to be 9.874 × 10⁻² W m⁻² sr⁻¹ µm⁻¹ DN⁻¹ with an uncertainty of 3.7%. The ATO-RM approach predicted an offset of 164, which is close to the true space DN of 170. The pre-launch NavCam 1 and 2 gains were compared with the ATO-RM gain and were found to agree within 2.1% and 2.8%, respectively, suggesting that sensor performance is stable in space. The ATO-RM calibration was found to be consistent within 3.9% over NavCam 2 exposure times differing by a factor of ±2. This approach can easily be adapted to inter-calibrate other space probe cameras given the current constellation of geostationary imagers.
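The gain retrieval itself reduces to a linear fit between ray-matched reference radiances and NavCam counts; a minimal sketch follows, with the regression forced through an assumed space count of 170 DN in the spirit of the paper's force-fit gain. The data are synthetic and the helper name is hypothetical.

```python
# Simplified sketch of deriving a linear calibration gain from ray-matched pairs of
# reference radiance (e.g., GOES-15 or EPIC, already mapped into the NavCam field of
# view) and NavCam counts, with the offset fixed at an assumed space count DN0 = 170.
import numpy as np

def force_fit_gain(dn, radiance, dn_space=170.0):
    """Least-squares gain for L = g * (DN - DN0), with the offset fixed at DN0."""
    x = np.asarray(dn, dtype=float) - dn_space
    y = np.asarray(radiance, dtype=float)
    return float(np.sum(x * y) / np.sum(x * x))

rng = np.random.default_rng(1)
true_gain = 9.874e-2                       # W m^-2 sr^-1 um^-1 per DN (value from the paper)
dn = rng.uniform(500, 3500, 2000)          # matched NavCam counts (synthetic)
radiance = true_gain * (dn - 170) + rng.normal(0, 2.0, dn.size)  # reference radiance (synthetic)

g = force_fit_gain(dn, radiance)
print(f"estimated gain: {g:.4e} W m^-2 sr^-1 um^-1 DN^-1")
```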
(This article belongs to the Special Issue Remote Sensing: 10th Anniversary)
Figures: NavCam 2, GOES-15, and DSCOVR-EPIC Earth images from the 22 September 2017 flyby; normalized spectral response functions; viewing zenith and relative azimuth angle images and their differences; ray-matched pixel pairs as density plots; ATO-RM regressions with and without the SBAF and radiance thresholds; NavCam 2 images and regressions at halved and doubled exposure times; integrating-sphere spectral radiance, quantum efficiency, and clear-sky ocean, ice cloud, and ATO spectra.
28 pages, 7938 KiB  
Article
Multimodal and Multi-Model Deep Fusion for Fine Classification of Regional Complex Landscape Areas Using ZiYuan-3 Imagery
by Xianju Li, Zhuang Tang, Weitao Chen and Lizhe Wang
Remote Sens. 2019, 11(22), 2716; https://doi.org/10.3390/rs11222716 - 19 Nov 2019
Cited by 27 | Viewed by 5714
Abstract
Land cover classification (LCC) of complex landscapes is attractive to the remote sensing community but poses great challenges. In complex open pit mining and agricultural development landscapes (CMALs), the landscape-specific characteristics limit the accuracy of LCC. The combination of traditional feature engineering and machine learning algorithms (MLAs) is not sufficient for LCC in CMALs. Deep belief network (DBN) methods have achieved success in some remote sensing applications because of their excellent unsupervised learning ability in feature extraction. However, the usability of DBN for LCC of complex landscapes and for integrating multimodal inputs has not been investigated. A novel multimodal and multi-model deep fusion strategy based on DBN was developed and tested for fine LCC (FLCC) of CMALs in a 109.4 km² area of Wuhan City, China. First, low-level and multimodal spectral–spatial and topographic features derived from ZiYuan-3 imagery were extracted and fused. The features were then input into a DBN for deep feature learning, and the learned features were fed to random forest and support vector machine (SVM) algorithms for classification. Comparison experiments were conducted using the deep features with a softmax classifier and the low-level features with MLAs. Five groups of training, validation, and test sets were constructed, with some spatial auto-correlation among them. A spatially independent test set and generalized McNemar tests were also employed to assess the accuracy. The fused DBN-SVM model achieved overall accuracies (OAs) of 94.74% ± 0.35% and 81.14% in FLCC and LCC, respectively, which significantly outperformed almost all other models. With this model, only three of the twenty land cover classes had OAs below 90%. In general, the developed model can contribute to FLCC and LCC in CMALs, and more deep learning algorithm-based models should be investigated in the future for the application of FLCC and LCC in complex landscapes.
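As a rough illustration of the deep-feature fusion stage, the sketch below substitutes a single scikit-learn BernoulliRBM layer for the multi-layer DBN used in the study and feeds its hidden activations to SVM and random forest classifiers; dimensions, parameters, and data are placeholders only.

```python
# Stand-in for the "deep features -> classical classifier" fusion stage: one
# BernoulliRBM layer (scikit-learn) substitutes for the multi-layer DBN, and its
# hidden activations are fed to an SVM and a random forest. Everything synthetic.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
X = rng.random((1000, 40))          # fused low-level spectral-spatial + topographic features
y = rng.integers(0, 20, 1000)       # 20 fine land-cover classes (placeholder labels)

X01 = MinMaxScaler().fit_transform(X)            # RBM expects values in [0, 1]
rbm = BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)
H = rbm.fit_transform(X01)                       # unsupervised feature learning

svm = SVC(kernel="rbf", C=10.0).fit(H, y)        # DBN-SVM analogue
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(H, y)
print("train OA (SVM):", svm.score(H, y), "train OA (RF):", rf.score(H, y))
```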
Figures: ZiYuan-3 fused imagery, study area, and field samples; restricted Boltzmann machine and deep belief network schematics; multimodal and multi-model deep fusion framework; sampling design for training, validation, test, and spatially independent test sets; feature importance, parameter optimization, and DBN-S convergence; predicted fine land cover classification maps; alternative sampling designs of training and test samples.
26 pages, 7260 KiB  
Article
Unsupervised Clustering of Multi-Perspective 3D Point Cloud Data in Marshes: A Case Study
by Chuyen Nguyen, Michael J. Starek, Philippe Tissot and James Gibeaut
Remote Sens. 2019, 11(22), 2715; https://doi.org/10.3390/rs11222715 - 19 Nov 2019
Cited by 3 | Viewed by 3988
Abstract
Dense three-dimensional (3D) point cloud data sets generated by Terrestrial Laser Scanning (TLS) and Unmanned Aircraft System based Structure-from-Motion (UAS-SfM) photogrammetry have different characteristics and provide different representations of the underlying land cover. While there are differences, a common challenge associated with these technologies is how to best take advantage of these large data sets, often several hundred million points, to efficiently extract relevant information. Given their size and complexity, the data sets cannot be efficiently and consistently separated into homogeneous features without the use of automated segmentation algorithms. This research aims to evaluate the performance and generalizability of an unsupervised clustering method, originally developed for segmentation of TLS point cloud data in marshes, by extending it to UAS-SfM point clouds. A combination of two sets of features is extracted from both datasets: “core” features that can be extracted from any 3D point cloud and “sensor-specific” features unique to the imaging modality. Comparisons of segmented results based on producer's and user's accuracies allow for identifying the advantages and limitations of each dataset and determining the generalization of the clustering method. The producer's accuracies suggest that UAS-SfM (94.7%) better represents tidal flats, while TLS (99.5%) is slightly more suitable for vegetated areas. The user's accuracies suggest that UAS-SfM outperforms TLS in vegetated areas, with 98.6% of the points identified as vegetation actually falling in vegetated areas, whereas TLS outperforms UAS-SfM in tidal flat areas with a 99.2% user's accuracy. The results demonstrate that the clustering method initially developed for TLS point cloud data transfers well to UAS-SfM point cloud data, enabling consistent and accurate segmentation of marsh land cover via an unsupervised method.
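A compact sketch of the clustering workflow follows, assuming per-point or per-voxel features have already been computed: k-means is applied for several candidate cluster numbers and the Davies-Bouldin index (used in the paper to guide cluster selection) compares them. The feature values here are synthetic placeholders.

```python
# Unsupervised segmentation sketch: cluster per-point/per-voxel features with k-means
# and compare candidate cluster numbers with the Davies-Bouldin index (lower is better).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
features = rng.random((5000, 6))  # e.g., height, roughness, curvature, reflectance/RGB stats
X = StandardScaler().fit_transform(features)

scores = {}
for k in range(2, 9):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    scores[k] = davies_bouldin_score(X, labels)

best_k = min(scores, key=scores.get)
print("DB index per k:", scores, "-> selected k:", best_k)
```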
Figures: study area at the Mustang Island Wetland Observatory; TLS point cloud colored by elevation and relative reflectance; UAS-SfM point cloud colorized by RGB; clustering framework; Davies-Bouldin index values for TLS and UAS-SfM; k-means clustering results; reference polygons for accuracy assessment; boxplots of standardized curvature features per cluster.
20 pages, 13174 KiB  
Article
Continuous Monitoring of Differential Reflectivity Bias for C-Band Polarimetric Radar Using Online Solar Echoes in Volume Scans
by Zhigang Chu, Wei Liu, Guifu Zhang, Leilei Kou and Nan Li
Remote Sens. 2019, 11(22), 2714; https://doi.org/10.3390/rs11222714 - 19 Nov 2019
Cited by 4 | Viewed by 2775
Abstract
The measurement error of differential reflectivity (ZDR), especially systematic ZDR bias, is a fundamental issue for the application of polarimetric radar data. Several calibration methods have been proposed and applied to correct ZDR bias. However, recent studies have shown that ZDR bias is time-dependent and can be significantly different on two adjacent days. This means that frequent monitoring of ZDR bias is necessary, which is difficult to achieve with existing methods. As radar sensitivity has gradually improved, large numbers of online solar echoes have begun to be observed in volume-scan data. Online solar echoes occur frequently and have a known theoretical ZDR value (0 dB), which could allow continuous monitoring of ZDR bias. However, online solar echoes are also affected by low signal-to-noise ratio and, for short-wavelength radar, by precipitation attenuation. In order to understand the variation of ZDR bias in a C-band polarimetric radar at the Nanjing University of Information Science and Technology (NUIST-CDP), we analyzed the characteristics of online solar echoes from this radar, including the daily frequency of occurrence, the distribution along the radial direction, precipitation attenuation, and fluctuation caused by noise. Then, an automatic method based on online solar echoes was proposed to monitor the daily ZDR bias of the NUIST-CDP. In the proposed method, a one-way differential attenuation correction for solar echoes and a maximum likelihood estimation using a Gaussian model were designed to estimate the optimal daily ZDR bias. The analysis of three months of data from the NUIST-CDP showed the following: (1) Online solar echoes occurred very frequently regardless of precipitation. Under the volume-scan mode, the average number of occurrences was 15 per day and the minimum number was seven. This high frequency could meet the requirements of continuous monitoring of the daily ZDR bias under both precipitation and no-rain conditions. (2) The result from the proposed online solar method was significantly linearly correlated with that from the vertical pointing method (observation at an elevation angle of 90°), with a correlation coefficient of 0.61, suggesting that the proposed method is feasible. (3) The day-to-day variation in the ZDR bias was relatively large, and 32% of such variations exceeded 0.2 dB, meaning that a one-time calibration is not representative over time. Accordingly, continuous calibration will be necessary. (4) The ZDR bias was found to be largely influenced by the ambient temperature, with a large negative correlation between the ZDR bias and the temperature.
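The estimation step can be illustrated with a small sketch: after attenuation correction, the solar ZDR samples are treated as Gaussian and the (optionally noise-weighted) maximum-likelihood mean is taken as the daily bias, since the theoretical solar ZDR is 0 dB. The SNR-to-noise relation and all numbers below are illustrative assumptions, not values from the paper.

```python
# Simplified estimate of daily Z_DR bias from online solar echoes: model the
# attenuation-corrected solar Z_DR samples as Gaussian and take the (optionally
# inverse-variance weighted) maximum-likelihood mean as the bias.
import numpy as np

def zdr_bias_mle(zdr_samples, sigma=None):
    z = np.asarray(zdr_samples, dtype=float)
    if sigma is None:
        return z.mean(), z.std(ddof=1)            # plain Gaussian MLE of the mean
    w = 1.0 / np.asarray(sigma, dtype=float) ** 2
    mu = np.sum(w * z) / np.sum(w)                # weighted ML estimate
    return mu, np.sqrt(1.0 / np.sum(w))

rng = np.random.default_rng(7)
true_bias = 0.25                                  # dB (synthetic)
snr = rng.uniform(3, 15, 300)                     # dB; solar signals are low-SNR
sigma = 0.8 / np.sqrt(10 ** (snr / 10))           # noise grows as SNR drops (illustrative)
samples = true_bias + rng.normal(0, sigma)
print("estimated Z_DR bias: %.3f dB" % zdr_bias_mle(samples, sigma)[0])
```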
(This article belongs to the Special Issue Radar Polarimetry—Applications in Remote Sensing of the Atmosphere)
Figures: conceptual sketches of solar and cloud/precipitation echoes; flow chart of the proposed automatic calibration method; example online solar echoes under clear-sky and precipitation conditions; frequency and radial distribution of online solar echoes; SNR versus the standard deviation of ZDR; position of solar signals relative to the main beam; daily ZDR bias from the online solar and vertical pointing methods; daily ZDR bias and mean temperature over three months; RHI and PPI examples before and after correction; relationship between ZDR bias and ambient temperature; distribution of the random error of ZDR; distributions of the daily ZDR bias, its day-to-day difference, and its values on rainy and no-rain days.
14 pages, 5174 KiB  
Article
Deep Learning-Generated Nighttime Reflectance and Daytime Radiance of the Midwave Infrared Band of a Geostationary Satellite
by Yerin Kim and Sungwook Hong
Remote Sens. 2019, 11(22), 2713; https://doi.org/10.3390/rs11222713 - 19 Nov 2019
Cited by 18 | Viewed by 4079
Abstract
The midwave infrared (MWIR) band at 3.75 μm is important in many satellite remote sensing applications. Depending on the contributions of the Earth and the Sun, this band observes reflectance during the daytime and radiance at night. This study presents an algorithm that adopts the conditional generative adversarial nets (CGAN) model to generate the otherwise unavailable nighttime reflectance and daytime radiance at the MWIR band of satellite observations. We used the daytime reflectance and nighttime radiance data in the MWIR band of the meteorological imager (MI) onboard the Communication, Ocean and Meteorological Satellite (COMS), as well as in the longwave infrared (LWIR; 10.8 μm) band of the COMS/MI sensor, from 1 January to 31 December 2017. The model was trained on 1024 × 1024 pixel images, with reflectance and radiance converted to digital numbers (DN) from 0 to 255, using a dataset of 256 images, and validated with a dataset of 107 images. Our results show a high statistical accuracy (bias = 3.539, root-mean-square error (RMSE) = 8.924, and correlation coefficient (CC) = 0.922 for daytime reflectance; bias = 0.006, RMSE = 5.842, and CC = 0.995 for nighttime radiance) between the COMS MWIR observations and the artificial intelligence (AI)-generated MWIR outputs. Consequently, our findings from the real MWIR observations could be used for identification of fog/low cloud, fire/hot spots, volcanic eruptions/ash, snow and ice, low-level atmospheric vector winds, urban heat islands, and clouds.
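A heavily simplified conditional-GAN training step (pix2pix-style, not the authors' exact architecture) is sketched below to show how the generator is conditioned on other-band imagery and trained against a discriminator plus an L1 term; shapes, layer sizes, and the loss weight are illustrative assumptions.

```python
# Toy conditional GAN step: the generator maps conditioning bands to an MWIR image,
# the discriminator judges (condition, image) pairs, and an L1 term keeps the output
# close to the observed target. All shapes and hyperparameters are illustrative.
import torch
import torch.nn as nn

G = nn.Sequential(                       # toy generator: 3 input bands -> 1 MWIR band
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
)
D = nn.Sequential(                       # toy discriminator on (condition, image) pairs
    nn.Conv2d(4, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 3, stride=2, padding=1),
)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

cond = torch.rand(2, 3, 64, 64)          # e.g., WV/IR1/IR2 radiances scaled to [0, 1]
target = torch.rand(2, 1, 64, 64)        # observed MWIR reflectance/radiance (DN-scaled)

# Discriminator step: real pairs vs. generated pairs.
fake = G(cond).detach()
d_loss = bce(D(torch.cat([cond, target], 1)), torch.ones(2, 1, 16, 16)) + \
         bce(D(torch.cat([cond, fake], 1)), torch.zeros(2, 1, 16, 16))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the discriminator and stay close to the target (L1 term).
fake = G(cond)
g_loss = bce(D(torch.cat([cond, fake], 1)), torch.ones(2, 1, 16, 16)) + 100.0 * l1(fake, target)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
print(float(d_loss), float(g_loss))
```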
(This article belongs to the Section Remote Sensing Image Processing)
Figures: example COMS MWIR daytime reflectance and nighttime radiance; MWIR, WV, IR1, and IR2 input images for day and night; CGAN-based model structure; real versus AI-generated COMS MWIR reflectance and radiance and their differences; scatterplots and statistics of AI-generated reflectance and radiance; time series of CC, bias, and RMSE over 2017; real and AI-generated MWIR time series during twilight.
18 pages, 4396 KiB  
Article
Lunar Calibration for ASTER VNIR and TIR with Observations of the Moon in 2003 and 2017
by Toru Kouyama, Soushi Kato, Masakuni Kikuchi, Fumihiro Sakuma, Akira Miura, Tetsushi Tachikawa, Satoshi Tsuchida, Kenta Obata and Ryosuke Nakamura
Remote Sens. 2019, 11(22), 2712; https://doi.org/10.3390/rs11222712 - 19 Nov 2019
Cited by 12 | Viewed by 6597
Abstract
The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), a multiband pushbroom sensor suite onboard Terra, has successfully provided valuable multiband images for approximately 20 years since Terra's launch in 1999. Since the launch, sensitivity degradations in ASTER's visible and near infrared (VNIR) and thermal infrared (TIR) bands have been monitored and corrected with various calibration methods. However, a non-negligible discrepancy between different calibration methods has been confirmed for the VNIR bands, which should be assessed with another reliable calibration method. In April 2003 and August 2017, ASTER observed the Moon (and deep space) to conduct a radiometric calibration (called lunar calibration), which can measure the temporal variation in the sensor sensitivity of the VNIR bands accurately enough (to better than 1%). From the lunar calibration, 3–6% sensitivity degradations were confirmed in the VNIR bands from 2003 to 2017. Since the degradations measured by the other methods showed different trends from the lunar calibration, the lunar calibration suggests that a further improvement is needed for the VNIR calibration. Sensitivity degradations in the TIR bands were also confirmed by monitoring the variation in the number of saturated pixels, which was qualitatively consistent with the onboard and vicarious calibrations.
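Conceptually, the lunar calibration reduces to comparing observed disk radiance with a photometric-model simulation under a restricted illumination/viewing geometry; the sketch below uses placeholder arrays in place of the SP model and ASTER imagery to show how a mean observed-to-simulated ratio yields a degradation estimate.

```python
# Sketch of a lunar-calibration ratio: compare observed disk radiance with a
# model simulation, restricted to favorable geometry (incidence < 60 deg,
# emission < 45 deg, as in the paper). All arrays are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(3)
shape = (400, 400)
simulated = rng.uniform(5, 80, shape)              # photometric-model radiance of the disk
incidence = rng.uniform(0, 90, shape)              # deg (placeholder geometry)
emission = rng.uniform(0, 90, shape)               # deg (placeholder geometry)
true_response = 0.95                               # 5% sensitivity loss (illustrative)
observed = true_response * simulated * rng.normal(1.0, 0.01, shape)

mask = (incidence < 60) & (emission < 45)          # geometry cut used for the evaluation
ratio = observed[mask] / simulated[mask]
print("estimated relative response: %.3f -> degradation %.1f%%"
      % (ratio.mean(), 100 * (1 - ratio.mean())))
```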
(This article belongs to the Special Issue ASTER 20th Anniversary)
Figures: ASTER VNIR Band 1 and TIR Band 10 Moon images from 2003 and 2017; VNIR offset patterns; observed versus SP-model-simulated lunar images, brightness ratios, and radiance comparisons; expected TIR saturation brightness temperatures; oversampling-corrected lunar images with saturated regions; comparison of VNIR degradation estimates from the onboard, vicarious, inter-band, and lunar calibrations; appendix figures on TIR whiskbroom observation geometry, ellipse fitting of the elongated lunar disk, and the oversampling correction.
22 pages, 37980 KiB  
Article
Identifying Linear Traces of the Han Dynasty Great Wall in Dunhuang Using Gaofen-1 Satellite Remote Sensing Imagery and the Hough Transform
by Lei Luo, Nabil Bachagha, Ya Yao, Chuansheng Liu, Pilong Shi, Lanwei Zhu, Jie Shao and Xinyuan Wang
Remote Sens. 2019, 11(22), 2711; https://doi.org/10.3390/rs11222711 - 19 Nov 2019
Cited by 19 | Viewed by 5221
Abstract
The Han Dynasty Great Wall (GH), one of the largest and most significant ancient defense projects in the whole of northern China, has been studied increasingly not only because it provides important information about the diplomatic and military strategies of the Han Empire (206 B.C.–220 A.D.), but also because it is considered to be a cultural and national symbol of modern China as well as a valuable archaeological monument. Thus, it is crucial to obtain the spatial pattern and preservation state of the GH for subsequent archaeological analysis and conservation management. To date, remote sensing specialists and archaeologists have relied mainly on manual visual interpretation, and a (semi-)automatic extraction approach is lacking. Based on very high-resolution (VHR) satellite remote sensing imagery, this paper aims to automatically identify the archaeological features of the GH located in ancient Dunhuang, northwest China. Gaofen-1 (GF-1) data were first processed and enhanced through image correction and mathematical morphology, and the M-statistic was then used to analyze the spectral characteristics of the GF-1 multispectral (MS) data. In addition, based on GF-1 panchromatic (PAN) data, an auto-identification method that integrates an improved Otsu segmentation algorithm with a Linear Hough Transform (LHT) is proposed. Finally, by comparison with visual extraction results, the proposed method was assessed qualitatively and semi-quantitatively to have an accuracy of 80% for the homogeneous background in Dunhuang. These automatic identification results could be used to map and evaluate the preservation state of the GH in Dunhuang. The proposed automatic approach was also applied to identify similar linear traces of other generations of the Great Wall of China (Western Xia Dynasty (581 A.D.–618 A.D.) and Ming Dynasty (1368 A.D.–1644 A.D.)) in various geographic regions. Moreover, the results indicate that computer-based automatic identification has great potential in archaeological research, and the proposed method can be generalized and applied to monitor and evaluate the state of preservation of the Great Wall of China in the future.
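The segmentation-plus-line-detection chain can be reproduced in a few lines with scikit-image: Otsu thresholding yields a binary mask and a linear Hough transform extracts its dominant straight feature. The synthetic image below stands in for a GF-1 PAN sub-image, and the paper's additional morphological and geometric filtering is omitted.

```python
# Otsu segmentation followed by a linear Hough transform on a synthetic image that
# contains one faint, roughly straight bright trace (a stand-in for a wall segment).
import numpy as np
from skimage.filters import threshold_otsu
from skimage.transform import hough_line, hough_line_peaks

rng = np.random.default_rng(0)
img = rng.normal(0.3, 0.05, (200, 200))            # background "desert" texture
rr = np.arange(200)
for off in range(3):                               # a 3-pixel-wide linear trace
    img[rr // 2 + 40 + off, rr] += 0.6

binary = img > threshold_otsu(img)                 # Otsu segmentation
h, theta, rho = hough_line(binary)                 # vote in (rho, theta) space
_, angles, dists = hough_line_peaks(h, theta, rho, num_peaks=1)
print("detected line: theta = %.1f deg, rho = %.1f px"
      % (np.degrees(angles[0]), dists[0]))
```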
Figures: general location of the Han Dynasty Great Wall and topography of the Hexi Corridor; the Dunhuang section in Stein's and Hedin's archaeological maps; map of ancient Dunhuang Prefecture with field photographs; GF-1 MS and PAN imagery of the linear traces and remaining beacon towers; GF-1 PMS data coverage and the 12 selected traces; schematic of the Hough Transform; threshold-based segmentation comparisons; voting results in (ρ, θ)-parameter space and detected lines; automatic identification results for the 12 typical traces; application to Ming and Western Xia Great Wall sections in Google Earth imagery.
18 pages, 6031 KiB  
Article
Denoising Algorithm for the FY-4A GIIRS Based on Principal Component Analysis
by Sihui Fan, Wei Han, Zhiqiu Gao, Ruoying Yin and Yu Zheng
Remote Sens. 2019, 11(22), 2710; https://doi.org/10.3390/rs11222710 - 19 Nov 2019
Cited by 9 | Viewed by 5793
Abstract
The Geostationary Interferometric Infrared Sounder (GIIRS) is the first high-spectral-resolution advanced infrared (IR) sounder onboard the new-generation Chinese geostationary meteorological satellite FengYun-4A (FY-4A). The GIIRS has 1650 channels, and its spectrum ranges from 700 to 2250 cm−1 with an unapodized spectral [...] Read more.
The Geostationary Interferometric Infrared Sounder (GIIRS) is the first high-spectral-resolution advanced infrared (IR) sounder onboard the new-generation Chinese geostationary meteorological satellite FengYun-4A (FY-4A). The GIIRS has 1650 channels, and its spectrum ranges from 700 to 2250 cm−1 with an unapodized spectral resolution of 0.625 cm−1. It represents a significant breakthrough for measurements with high temporal, spatial, and spectral resolutions worldwide. Many GIIRS channels have quite similar spectral signal characteristics; they are highly correlated with each other and carry a high degree of information redundancy. Therefore, this paper applies a principal component analysis (PCA)-based denoising algorithm (PDA) to simulation data with different noise levels and to observation data to reduce noise. The results show that channel reconstruction using inter-channel spatial dependency and spectral similarity can reduce the noise in the observed brightness temperature (BT). A comparison of the BT observed by the GIIRS (O) with the BT simulated by the radiative transfer model (B) shows that a deviation occurs in the observation channel depending on the observation array. The results show that the array features of the reconstructed observation BT (rrO) that depend on the observation array are weakened, and the effect of the array position on the observations in the sub-center of the field of regard (FOR) is partially eliminated after the PDA procedure is applied. The high observation-minus-simulation differences (O-B) in the sub-center of the FOR array are notably reduced after the PDA procedure is implemented. The improvement of the high O-B is more distinct, and the low O-B becomes smoother. In each scan line, the standard deviation of the reconstructed background departures (rrO-B) is lower than that of the background departures (O-B). The observation error calculated by posterior estimation based on variational assimilation also verifies the efficiency of the PDA. The typhoon experiment also shows that, among the 29 selected assimilation channels, the observation error of 65% of the channels was reduced as calculated by the triangle method. Full article
(This article belongs to the Special Issue Feature Papers for Section Atmosphere Remote Sensing)
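As a rough illustration of the PCA-based reconstruction idea (keep the leading principal components, which carry the correlated signal, and discard the trailing ones, which are mostly noise), here is a minimal numpy sketch. The channel count, number of retained components, and synthetic data are assumptions, and the paper's procedure for choosing the optimum number of components is not reproduced.

```python
# Minimal sketch: PCA-based denoising (PDA-style reconstruction) of multi-channel spectra.
import numpy as np

def pca_denoise(bt, n_pc):
    """
    bt   : array (n_obs, n_channels) of brightness temperatures, one spectrum per field of view.
    n_pc : number of leading principal components kept for reconstruction.
    Returns the reconstructed (denoised) brightness temperatures.
    """
    mean = bt.mean(axis=0)
    anomalies = bt - mean
    # Principal components via SVD of the anomaly matrix.
    u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    # Truncate to the leading components; the discarded trailing ones carry mostly noise.
    recon = (u[:, :n_pc] * s[:n_pc]) @ vt[:n_pc, :]
    return recon + mean

# Example with synthetic data: 1000 spectra of 91 channels, 20 components kept (illustrative numbers).
rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 6, 91))[None, :] * rng.normal(280, 5, (1000, 1))
noisy = clean + rng.normal(0, 0.5, clean.shape)
denoised = pca_denoise(noisy, n_pc=20)
```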
Show Figures
Graphical abstract and Figures 1–14: simulated BT of the original and uncontaminated channels; noise level vs. optimum number of principal components for simulated and observed BT; observed vs. simulated BT scatterplots; temperature Jacobians of the GIIRS longwave channels; observed, simulated, and reconstructed BT and the departures (O-B, rrO-B, rrO-O) for channel 78; array-distribution and PDF characteristics of the departures; departures and their standard deviations by field of view; spatial distributions, scatterplots, and array characteristics of the departures during typhoon Mangkhut.
17 pages, 3245 KiB  
Article
Evaluation of Satellite-Based Rainfall Estimates in the Lower Mekong River Basin (Southeast Asia)
by Chelsea Dandridge, Venkat Lakshmi, John Bolten and Raghavan Srinivasan
Remote Sens. 2019, 11(22), 2709; https://doi.org/10.3390/rs11222709 - 19 Nov 2019
Cited by 30 | Viewed by 5116
Abstract
Satellite-based precipitation is an essential tool for regional water resource applications that require frequent observations of meteorological forcing, particularly in areas that have sparse rain gauge networks. To fully realize the utility of remotely sensed precipitation products in watershed modeling and decision-making, a [...] Read more.
Satellite-based precipitation is an essential tool for regional water resource applications that require frequent observations of meteorological forcing, particularly in areas that have sparse rain gauge networks. To fully realize the utility of remotely sensed precipitation products in watershed modeling and decision-making, a thorough evaluation of the accuracy of satellite-based rainfall and regional gauge network estimates is needed. In this study, Tropical Rainfall Measuring Mission (TRMM) Multi-Satellite Precipitation Analysis (TMPA) 3B42 v.7 and Climate Hazards Group InfraRed Precipitation with Station data (CHIRPS) daily rainfall estimates were compared with daily rain gauge observations from 2000 to 2014 in the Lower Mekong River Basin (LMRB) in Southeast Asia. Monthly, seasonal, and annual comparisons were performed, including calculations of the correlation coefficient, coefficient of determination, bias, root mean square error (RMSE), and mean absolute error (MAE). Our validation test showed TMPA to correctly detect precipitation or no precipitation on 64.9% of all days and CHIRPS on 66.8% of all days, compared to daily in-situ rainfall measurements. The accuracy of the satellite-based products varied greatly between the wet and dry seasons. Both TMPA and CHIRPS showed higher correlation with in-situ data during the wet season (June–September) than during the dry season (November–January). Additionally, both performed better on a monthly than an annual time-scale when compared to in-situ data. The satellite-based products showed wet biases during months that received higher cumulative precipitation. Based on a spatial correlation analysis, the average r-value of CHIRPS was much higher than that of TMPA across the basin. CHIRPS correlated better than TMPA at lower elevations and for monthly rainfall accumulations of less than 500 mm. While both satellite-based products performed well compared to rain gauge measurements, the present research shows that CHIRPS might be better at representing precipitation over the LMRB than TMPA. Full article
(This article belongs to the Special Issue Remote Sensing and Modeling of the Terrestrial Water Cycle)
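The comparison statistics named in the abstract are standard point-matching metrics; a minimal sketch follows. The 0.06 mm wet-day threshold echoes the threshold mentioned in the paper's workflow description, but treat the default here as illustrative.

```python
# Minimal sketch: point-scale skill metrics for comparing satellite estimates with gauge data.
import numpy as np
from scipy.stats import pearsonr

def skill_scores(sat, gauge, wet_threshold=0.06):
    """sat, gauge: 1-D arrays of matched daily rainfall (mm). Threshold value is illustrative."""
    sat, gauge = np.asarray(sat, float), np.asarray(gauge, float)
    r, _ = pearsonr(sat, gauge)
    bias = np.mean(sat - gauge)
    rmse = np.sqrt(np.mean((sat - gauge) ** 2))
    mae = np.mean(np.abs(sat - gauge))
    # Fraction of days on which both sources agree on "rain" vs. "no rain".
    detection = np.mean((sat >= wet_threshold) == (gauge >= wet_threshold))
    return {"r": r, "R2": r ** 2, "bias": bias, "RMSE": rmse, "MAE": mae,
            "detection_accuracy": detection}

# Toy usage with synthetic daily series.
rng = np.random.default_rng(1)
gauge = rng.gamma(0.4, 10, 365)
sat = gauge * rng.normal(1.0, 0.3, 365) + rng.normal(0, 1, 365)
print(skill_scores(sat, gauge))
```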
Show Figures
Graphical abstract and Figures 1–7: map of the Lower Mekong River Basin and the rain gauge locations; methodology workflow (dry days defined as both sources below a 0.06 mm threshold); average dry-season, wet-season, and annual rainfall distributions for TMPA, CHIRPS, and in-situ data; monthly boxplots for each source; time-series comparison of monthly averages; spatial correlation (mean r-value) of CHIRPS and TMPA against the gauge stations.
13 pages, 4052 KiB  
Article
Annual Green Water Resources and Vegetation Resilience Indicators: Definitions, Mutual Relationships, and Future Climate Projections
by Matteo Zampieri, Bruna Grizzetti, Michele Meroni, Enrico Scoccimarro, Anton Vrieling, Gustavo Naumann and Andrea Toreti
Remote Sens. 2019, 11(22), 2708; https://doi.org/10.3390/rs11222708 - 19 Nov 2019
Cited by 14 | Viewed by 6029
Abstract
Satellites offer a privileged view of terrestrial ecosystems and a unique possibility to evaluate their status, their resilience, and the reliability of the services they provide. In this study, we introduce two indicators for estimating the resilience of terrestrial ecosystems from the local [...] Read more.
Satellites offer a privileged view of terrestrial ecosystems and a unique possibility to evaluate their status, their resilience, and the reliability of the services they provide. In this study, we introduce two indicators for estimating the resilience of terrestrial ecosystems from the local to the global level. We use Normalized Difference Vegetation Index (NDVI) time series to estimate annual vegetation primary production resilience, and annual precipitation time series to estimate annual green water resource resilience. Resilience estimation is achieved through the annual production resilience indicator, originally developed in agricultural science, which is formally derived from the original ecological definition of resilience, i.e., the largest stress that the system can absorb without losing its function. Interestingly, we find coherent relationships between annual green water resource resilience and vegetation primary production resilience over a wide range of world biomes, suggesting that green water resource resilience contributes to determining vegetation primary production resilience. Finally, we estimate the changes in green water resource resilience due to climate change using results from the sixth phase of the Coupled Model Intercomparison Project (CMIP6) and discuss the potential consequences of global warming for ecosystem service reliability. Full article
(This article belongs to the Special Issue Ecosystem Services with Remote Sensing)
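The abstract does not give the formula for the annual production resilience indicator, so the sketch below uses one common formulation consistent with the cited ecological definition, the squared mean divided by the variance of the annual series; treat that choice, and the synthetic series, as assumptions rather than the paper's exact definition.

```python
# Minimal sketch: a production-resilience indicator for an annual time series.
import numpy as np

def resilience_indicator(annual_values):
    """
    annual_values: 1-D array of annual precipitation (for R_P) or annual mean NDVI (for R_V).
    Assumed formulation: squared mean over variance (higher = more resilient).
    """
    x = np.asarray(annual_values, float)
    return x.mean() ** 2 / x.var()

# Illustrative use: a stable series scores higher than a volatile one with the same mean.
rng = np.random.default_rng(2)
stable = rng.normal(800, 40, 30)     # mm per year, low inter-annual variability
volatile = rng.normal(800, 200, 30)  # same mean, high variability
print(resilience_indicator(stable), resilience_indicator(volatile))
```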
Show Figures
Graphical abstract and Figures 1–5: climatology, standard deviation, green water resource resilience indicator (R_P), and WASP-based drought hazard of observed annual precipitation (1901–2015); 2D histogram of R_P vs. drought return period; climatology, standard deviation, and vegetation primary production resilience indicator (R_V) of annual mean NDVI (1982–2015) with the Köppen–Geiger biome map; R_V and R_P ratios by climate zone; CMIP6-projected changes in precipitation statistics and green water resource resilience under SSP585.
19 pages, 25567 KiB  
Article
Analysis of Parameters for the Accurate and Fast Estimation of Tree Diameter at Breast Height Based on Simulated Point Cloud
by Pei Wang, Xiaozheng Gan, Qing Zhang, Guochao Bu, Li Li, Xiuxian Xu, Yaxin Li, Zichu Liu and Xiangming Xiao
Remote Sens. 2019, 11(22), 2707; https://doi.org/10.3390/rs11222707 - 19 Nov 2019
Cited by 3 | Viewed by 3001
Abstract
Terrestrial laser scanning (TLS) is a promising technology for forest surveys. Estimating diameters at breast height (DBH) accurately and quickly has been considered a key step in estimating forest structural parameters using TLS technology. However, the accuracy and speed of DBH [...] Read more.
Terrestrial laser scanning (TLS) is a promising technology for forest surveys. Estimating diameters at breast height (DBH) accurately and quickly has been considered a key step in estimating forest structural parameters using TLS technology. However, the accuracy and speed of DBH estimation are affected by many factors, which are classified into three groups in this study. We adopt an additive error model and propose a simple and general simulation method to evaluate the impacts of the three groups of parameters, which include the range error, the angular errors in the vertical and horizontal directions, the angular step width, the trunk distance, the slice thickness, and the real DBH. The parameters were evaluated statistically by using many simulated point cloud datasets generated under strict control. Two typical circle fitting methods were used to estimate DBH, and their accuracy and speed were compared. The results showed that the range error and the angular error in the horizontal direction played major roles in the accuracy of DBH estimation; the angular step widths had only a slight effect in the case of high range accuracy; the distance showed no relationship with the accuracy of the DBH estimation; increasing the scanning angular width was relatively beneficial to the DBH estimation; and the algebraic circle fitting method was relatively fast for DBH estimation and, in the case of high range accuracy, performed comparably to the geometrical method. Possible methods that could help to obtain accurate and fast DBH estimation results are proposed and discussed to optimize the design of forest inventory experiments. Full article
(This article belongs to the Special Issue Virtual Forest)
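The two circle-fitting families compared in the paper can be illustrated with a simple algebraic fit (a Kåsa-style linear least-squares fit, used here as a stand-in for Taubin's method) and a geometric Levenberg–Marquardt fit of radial residuals; the noise levels, arc coverage, and seed values below are illustrative assumptions.

```python
# Minimal sketch: algebraic vs. geometric circle fitting on a breast-height slice point cloud.
import numpy as np
from scipy.optimize import least_squares

def fit_circle_algebraic(x, y):
    """Kasa-style linear least-squares circle fit (simple algebraic stand-in for Taubin's method)."""
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x ** 2 + y ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx ** 2 + cy ** 2 - F)
    return cx, cy, r

def fit_circle_geometric(x, y):
    """Geometric fit: minimize radial residuals with Levenberg-Marquardt, seeded by the algebraic fit."""
    cx0, cy0, r0 = fit_circle_algebraic(x, y)
    def residuals(p):
        cx, cy, r = p
        return np.hypot(x - cx, y - cy) - r
    sol = least_squares(residuals, x0=[cx0, cy0, r0], method="lm")
    return sol.x  # (cx, cy, r); DBH estimate = 2 * r

# Illustrative slice: noisy points on a 0.15 m radius arc (the scanner sees roughly half the stem).
rng = np.random.default_rng(3)
theta = rng.uniform(-np.pi / 2, np.pi / 2, 200)
x = 5.0 + 0.15 * np.cos(theta) + rng.normal(0, 0.005, 200)
y = 0.15 * np.sin(theta) + rng.normal(0, 0.005, 200)
print(2 * fit_circle_algebraic(x, y)[2], 2 * fit_circle_geometric(x, y)[2])
```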
Show Figures
Graphical abstract and Figures 1–14: method flowchart; scanning geometry of a trunk slice; simulated slices with and without errors; estimated DBH and time cost of the Levenberg–Marquardt (LM) and Taubin methods; impacts of the range error, the vertical and horizontal angular errors, the angular step width, the distance, the slice thickness, and the real DBH on the relative error of the DBH estimation; relationships between the number of points, the scanning angular width, and the relative error.
20 pages, 2251 KiB  
Article
Assessment of Portable Chlorophyll Meters for Measuring Crop Leaf Chlorophyll Concentration
by Taifeng Dong, Jiali Shang, Jing M. Chen, Jiangui Liu, Budong Qian, Baoluo Ma, Malcolm J. Morrison, Chao Zhang, Yupeng Liu, Yichao Shi, Hui Pan and Guisheng Zhou
Remote Sens. 2019, 11(22), 2706; https://doi.org/10.3390/rs11222706 - 19 Nov 2019
Cited by 72 | Viewed by 9712
Abstract
Accurate measurement of leaf chlorophyll concentration (LChl) in the field using a portable chlorophyll meter (PCM) is crucial to support methodology development for mapping the spatiotemporal variability of crop nitrogen status using remote sensing. Several PCMs have been developed to measure LChl instantaneously [...] Read more.
Accurate measurement of leaf chlorophyll concentration (LChl) in the field using a portable chlorophyll meter (PCM) is crucial to support methodology development for mapping the spatiotemporal variability of crop nitrogen status using remote sensing. Several PCMs have been developed to measure LChl instantaneously and non-destructively in the field; however, their readings are relative quantities that need to be converted into actual LChl values using conversion functions. The aim of this study was to investigate the relationship between actual LChl and the readings obtained by three PCMs: SPAD-502, CCM-200, and Dualex-4. Field experiments were conducted in 2016 on four crops: corn (Zea mays L.), soybean (Glycine max L. Merr.), spring wheat (Triticum aestivum L.), and canola (Brassica napus L.), at the Central Experimental Farm of Agriculture and Agri-Food Canada in Ottawa, Ontario, Canada. To evaluate the impact of other factors (leaf internal structure, leaf pigments other than chlorophyll, and the heterogeneity of LChl distribution) on the conversion function, a global sensitivity analysis was conducted using the PROSPECT-D model to simulate PCM readings under different conditions. Results showed that Dualex-4 performed better for actual LChl measurement than SPAD-502 and CCM-200 when a general conversion function was used for all four crops tested. For SPAD-502 and CCM-200, the error in the readings increases with increasing LChl. The sensitivity analysis reveals that deviations from the calibration functions are induced more by non-uniform LChl distribution than by leaf architecture. Dualex-4 readings are better able to suppress these influences than those of the other two PCMs. Full article
(This article belongs to the Special Issue Remote Sensing for Precision Nitrogen Management)
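Converting a relative PCM reading into actual LChl amounts to fitting a conversion function against destructively measured chlorophyll; the power-law form and synthetic calibration data in this sketch are assumptions for illustration, not the functions derived in the paper.

```python
# Minimal sketch: fitting a conversion function from PCM readings to measured LChl.
import numpy as np
from scipy.optimize import curve_fit

def conversion(reading, a, b, c):
    # A flexible power-law form; the functional form here is an assumption for illustration.
    return a * reading ** b + c

# meter: PCM readings; lchl: actual leaf chlorophyll (ug cm^-2), synthetic calibration data here.
rng = np.random.default_rng(4)
meter = rng.uniform(5, 60, 120)
lchl = 0.04 * meter ** 1.6 + 2 + rng.normal(0, 1.5, 120)

params, _ = curve_fit(conversion, meter, lchl, p0=[0.1, 1.0, 0.0])
predicted = conversion(meter, *params)
rmse = np.sqrt(np.mean((predicted - lchl) ** 2))
print(params, rmse)
```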
Show Figures
Graphical abstract and Figures 1–7: location of the four sampling fields at the Central Experimental Farm, Ottawa; means and standard deviations of leaf pigments and meter readings for the four crops; linear relationships among leaf pigments and between pigments and the SPAD-502, Dualex-4, and CCM-200 readings; meter readings vs. their measurement standard deviations; PROSPECT-D-simulated reading uncertainty due to interfering factors and due to non-uniform leaf chlorophyll distribution.
10 pages, 3578 KiB  
Letter
Analysis of the Optimal Wavelength for Oceanographic Lidar at the Global Scale Based on the Inherent Optical Properties of Water
by Shuguo Chen, Cheng Xue, Tinglu Zhang, Lianbo Hu, Ge Chen and Junwu Tang
Remote Sens. 2019, 11(22), 2705; https://doi.org/10.3390/rs11222705 - 19 Nov 2019
Cited by 16 | Viewed by 3613
Abstract
Understanding the optimal wavelength for detecting the water column profile from a light detection and ranging (lidar) system is important in the design of oceanographic lidar systems. In this research, the optimal wavelength for detecting the water column profile using a lidar system [...] Read more.
Understanding the optimal wavelength for detecting the water column profile from a light detection and ranging (lidar) system is important in the design of oceanographic lidar systems. In this research, the optimal wavelength for detecting the water column profile using a lidar system at the global scale was analyzed based on the inherent optical properties of water. In addition, assuming that the lidar system has an ideal detection capability in its hardware design, the maximum detectable depth at the established optimal wavelength was analyzed and compared with the mixed layer depth measured by Argo data at the global scale. The conclusions drawn are as follows. First, the optimal wavelengths for the lidar system lie between the blue and green bands: for the open ocean, the optimal wavelengths are between 420 and 510 nm, and for coastal waters, between 520 and 580 nm. To obtain the best detection ability, the best configuration is a lidar system with multiple bands; a 490 nm wavelength is recommended when an oceanographic lidar system is used at the global scale with a single wavelength. Second, for the recommended 490 nm band, a lidar system able to detect over 4 attenuation lengths can penetrate the mixed layer in 80% of global waters. Full article
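The wavelength selection and depth conversion described above reduce to two small computations: pick the band that minimizes the diffuse attenuation coefficient Kd, and divide the detection capability expressed in attenuation lengths by Kd. The Kd values in the sketch below are assumed clear-water magnitudes, not the paper's retrievals.

```python
# Minimal sketch: optimal band from a Kd spectrum and maximum detectable depth for a given
# detection capability in attenuation lengths (AL).
import numpy as np

def optimal_wavelength(wavelengths_nm, kd_per_m):
    """The optimal band is taken as the one minimizing Kd."""
    i = int(np.argmin(kd_per_m))
    return wavelengths_nm[i], kd_per_m[i]

def max_detectable_depth(kd_per_m, attenuation_lengths=4.0):
    """Depth (m) at which the accumulated optical thickness reaches the lidar's detection limit."""
    return attenuation_lengths / kd_per_m

# Illustrative open-ocean values (not from the paper): Kd at 443, 490, 532, 555 nm.
wl = np.array([443, 490, 532, 555])
kd = np.array([0.025, 0.022, 0.045, 0.060])   # m^-1, assumed clear-water magnitudes
best_wl, best_kd = optimal_wavelength(wl, kd)
print(best_wl, max_detectable_depth(best_kd, 4.0))   # e.g., 490 nm and roughly 180 m for 4 AL
```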
Show Figures
Graphical abstract and Figures 1–7: definition of the optimal wavelength; flowchart for retrieving spectral Kd from MODIS data; global spatial and frequency distributions of the optimal wavelengths; maximum detectable depth compared with the Argo mixed layer depth and their differences; detection-depth differences for a 4 AL capability at 455/490 nm vs. 532 nm; detection performance for 3 AL and 2 AL capabilities.
25 pages, 7958 KiB  
Article
Victim Localization in USAR Scenario Exploiting Multi-Layer Mapping Structure
by Abdulrahman Goian, Reem Ashour, Ubaid Ahmad, Tarek Taha, Nawaf Almoosa and Lakmal Seneviratne
Remote Sens. 2019, 11(22), 2704; https://doi.org/10.3390/rs11222704 - 19 Nov 2019
Cited by 12 | Viewed by 3699
Abstract
Urban search and rescue missions require rapid intervention to locate victims and survivors in the affected environments. To facilitate this activity, Unmanned Aerial Vehicles (UAVs) have been recently used to explore the environment and locate possible victims. In this paper, a UAV equipped [...] Read more.
Urban search and rescue missions require rapid intervention to locate victims and survivors in the affected environments. To facilitate this activity, Unmanned Aerial Vehicles (UAVs) have been recently used to explore the environment and locate possible victims. In this paper, a UAV equipped with multiple complementary sensors is used to detect the presence of a human in an unknown environment. A novel human localization approach in unknown environments is proposed that merges information gathered from deep-learning-based human detection, wireless signal mapping, and thermal signature mapping to build an accurate global human location map. A next-best-view (NBV) approach with a proposed multi-objective utility function is used to iteratively evaluate the map to locate the presence of humans rapidly. Results demonstrate that the proposed strategy outperforms other methods in several performance measures such as the number of iterations, entropy reduction, and traveled distance. Full article
(This article belongs to the Section Remote Sensing Image Processing)
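The next-best-view idea of scoring candidate viewpoints on the fused probability map can be sketched with a toy multi-objective utility that trades expected information gain against travel cost; the weights, footprint masks, and grid here are illustrative assumptions, not the paper's utility function.

```python
# Toy sketch: scoring candidate views on a victim-probability grid (information gain minus travel cost).
import numpy as np

def cell_entropy(p):
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def view_utility(prob_map, visible_mask, travel_dist, w_info=1.0, w_dist=0.1):
    info_gain = cell_entropy(prob_map[visible_mask]).sum()   # uncertainty the view could resolve
    return w_info * info_gain - w_dist * travel_dist

# Two candidate views over a 20x20 merged probability map.
rng = np.random.default_rng(5)
prob_map = rng.uniform(0.05, 0.95, (20, 20))
view_a = np.zeros((20, 20), bool); view_a[:10, :10] = True   # large, uncertain footprint
view_b = np.zeros((20, 20), bool); view_b[15:, 15:] = True   # small footprint, farther away
print(view_utility(prob_map, view_a, travel_dist=4.0),
      view_utility(prob_map, view_b, travel_dist=9.0))
```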
Show Figures
Graphical abstract and Figures 1–16: occupancy-grid example of an RGB-D scanned scene; system-model flowchart; SSD detector model; thermal detection with the Optris PI200 and thermal map updates; trilateration and wireless map updates; merged multi-resolution map update; effect of the β parameter on the victim utility function; floating sensor model; simulated environment; entropy and travelled distance vs. iteration for the regular and adaptive grid sampling approaches; generator states vs. iterations; vehicle trajectories for the merged-utility tests, including the local minimum where the A* planner was invoked.
18 pages, 3219 KiB  
Article
Multi-Scale Association between Vegetation Growth and Climate in India: A Wavelet Analysis Approach
by Dawn Emil Sebastian, Sangram Ganguly, Jagdish Krishnaswamy, Kate Duffy, Ramakrishna Nemani and Subimal Ghosh
Remote Sens. 2019, 11(22), 2703; https://doi.org/10.3390/rs11222703 - 18 Nov 2019
Cited by 23 | Viewed by 6177
Abstract
The monsoon climate over India has a high degree of spatio-temporal heterogeneity, characterized by the existence of multi-climatic zones along with strong intra-seasonal, seasonal, and inter-annual variability. Vegetation growth of Indian forests relates to this climate variability, though the dependence structure over space and time [...] Read more.
The monsoon climate over India has a high degree of spatio-temporal heterogeneity, characterized by the existence of multi-climatic zones along with strong intra-seasonal, seasonal, and inter-annual variability. Vegetation growth of Indian forests relates to this climate variability, though the dependence structure over space and time is yet to be explored. Here, we present a comprehensive analysis of this association using quality-controlled satellite-based remote sensing datasets of vegetation greenness and radiation along with station-based gridded precipitation data. A spatio-temporal time-frequency analysis using wavelets is performed to understand the relative association of vegetation growth with precipitation and radiation at different time scales. The inter-annual variation of forest greenness over tropical India is observed to be correlated with the seasonal monsoon precipitation. However, at inter- and intra-seasonal scales, vegetation has a strong association with radiation in regions of high precipitation such as the Western Ghats, Eastern Himalayas, and Northeast hills. Forests in the Western Himalayas were found to depend more on winter precipitation from western disturbances than on the southwest monsoon precipitation. Our results provide new and useful region-specific information for dynamic vegetation modelling in the Indian monsoon region that may further be used in understanding global vegetation-land-atmosphere interactions. Full article
(This article belongs to the Special Issue Remote Sensing of Tropical Phenology)
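As a lightweight stand-in for the wavelet coherence analysis (which additionally localizes the association in time), the sketch below computes Welch-based spectral coherence between synthetic EVI and precipitation series with scipy; the sampling rate, segment length, and series are illustrative assumptions.

```python
# Simplified stand-in: spectral (Welch) coherence between EVI and precipitation series.
import numpy as np
from scipy.signal import coherence

# Synthetic 16-day composites over ~15 years (23 composites per year), illustrative only.
n_per_year, years = 23, 15
t = np.arange(n_per_year * years)
precip = 50 + 40 * np.sin(2 * np.pi * t / n_per_year) + np.random.default_rng(6).normal(0, 5, t.size)
evi = 0.3 + 0.002 * precip + np.random.default_rng(7).normal(0, 0.02, t.size)

# fs in samples per year, so frequencies come out in cycles per year.
f, cxy = coherence(evi, precip, fs=n_per_year, nperseg=4 * n_per_year)
annual_band = (f > 0.8) & (f < 1.2)
print("mean coherence near the annual cycle:", cxy[annual_band].mean())
```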
Show Figures
Graphical abstract and Figures 1–7: forested pixels and their classification into five forest regions; inter-annual standard anomalies of EVI, precipitation, and radiation for the whole Indian landmass and for forest pixels; EVI variation with precipitation and radiation anomaly composites for each region during the monsoon months; mean monsoon-minus-pre-monsoon EVI change; multi-year mean seasonal cycles of EVI, precipitation, and radiation by region; wavelet coherence power spectra of EVI with precipitation and with radiation for the five regions.
18 pages, 8069 KiB  
Letter
The Radiative Transfer Characteristics of the O2 Infrared Atmospheric Band in Limb-Viewing Geometry
by Weiwei He, Kuijun Wu, Yutao Feng, Di Fu, Zhenwei Chen and Faquan Li
Remote Sens. 2019, 11(22), 2702; https://doi.org/10.3390/rs11222702 - 18 Nov 2019
Cited by 14 | Viewed by 3861
Abstract
The O2(a1Δg) emission near 1.27 μm provides an important means to remotely sense the thermal characteristics, dynamical features, and compositional structures of the upper atmosphere because of its photochemistry and spectroscopic properties. In this work, an emission–absorption [...] Read more.
The O2(a1Δg) emission near 1.27 μm provides an important means to remotely sense the thermal characteristics, dynamical features, and compositional structures of the upper atmosphere because of its photochemistry and spectroscopic properties. In this work, an emission–absorption transfer model for limb measurements was developed to calculate the radiation and scattering spectral brightness by means of a line-by-line approach. The nonlocal thermal equilibrium (non-LTE) model was taken into account for accurate calculation of the O2(a1Δg) emission by incorporating the latest rate constants and spectral parameters. The spherical adding and doubling methods were used in the multiple scattering model. Representative emission and absorption line shapes of the O2(a1Δg, υ = 0)–O2(X3Σg, υ = 0) band and their spectral behavior with varying altitude were examined. The effects of solar zenith angle, surface albedo, and aerosol loading on the line shapes were also studied. This paper emphasizes the advantage of using the infrared atmospheric band for remote sensing of the atmosphere from 20 up to 120 km, a significant region where the strongest coupling between the lower and upper atmosphere occurs. Full article
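The elementary building block of a line-by-line calculation is the individual line shape; the sketch below evaluates a Doppler-broadened (Gaussian) profile at different temperatures. The line position is only approximately placed near 1.27 μm, and pressure (Voigt) broadening, which matters at lower altitudes, is ignored; both are simplifying assumptions rather than the paper's model.

```python
# Minimal sketch: Doppler-broadened line shape for a single O2 infrared atmospheric band transition.
import numpy as np

K_B = 1.380649e-23     # J K^-1
C = 2.99792458e8       # m s^-1
AMU = 1.66053907e-27   # kg

def doppler_sigma(nu0_cm, temperature_k, mass_amu=32.0):
    """Gaussian standard deviation (cm^-1) of a Doppler-broadened line centred at nu0_cm."""
    return nu0_cm * np.sqrt(K_B * temperature_k / (mass_amu * AMU)) / C

def doppler_profile(nu_cm, nu0_cm, temperature_k):
    """Area-normalized Doppler line shape evaluated on the wavenumber grid nu_cm."""
    s = doppler_sigma(nu0_cm, temperature_k)
    return np.exp(-0.5 * ((nu_cm - nu0_cm) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

nu = np.linspace(7880.0, 7884.0, 2001)     # cm^-1, near 1.27 um (illustrative grid and line position)
for t_k in (200.0, 250.0, 300.0):          # roughly mesospheric to near-surface temperatures
    profile = doppler_profile(nu, 7882.0, t_k)
    print(t_k, profile.max())              # hotter layers give broader, lower peaks
```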
Show Figures
Graphical abstract and Figures 1–16: absorption line strengths of the O2 infrared atmospheric and atmospheric bands; dayglow production mechanism; modeled O2(a1Δg) concentration and vibrational-level contributions; seasonal volume emission rate profiles; emission and absorption spectra and their line-strength ratio; limb-viewing path model; limb spectral brightness at different tangent heights; self-absorption and transmission of the R9Q10 line; integral radiance and limb-view weights; scattered-sunlight radiance and target-layer absorbance; line-shape variation with tangent height, surface albedo, solar zenith angle, and aerosol; comparisons with MODTRAN4 and TIMED–SABER measurements.
16 pages, 11071 KiB  
Article
Spatiotemporal Fusion of Satellite Images via Very Deep Convolutional Networks
by Yuhui Zheng, Huihui Song, Le Sun, Zebin Wu and Byeungwoo Jeon
Remote Sens. 2019, 11(22), 2701; https://doi.org/10.3390/rs11222701 - 18 Nov 2019
Cited by 20 | Viewed by 3608
Abstract
Spatiotemporal fusion provides an effective way to fuse two types of remote sensing data featured by complementary spatial and temporal properties (typical representatives are Landsat and MODIS images) to generate fused data with both high spatial and temporal resolutions. This paper presents a [...] Read more.
Spatiotemporal fusion provides an effective way to fuse two types of remote sensing data featured by complementary spatial and temporal properties (typical representatives are Landsat and MODIS images) to generate fused data with both high spatial and temporal resolutions. This paper presents a very deep convolutional neural network (VDCN) based spatiotemporal fusion approach to effectively handle massive remote sensing data in practical applications. Compared with existing shallow learning methods, especially sparse-representation-based ones, the proposed VDCN-based model has the following merits: (1) explicitly correlating the MODIS and Landsat images by learning a non-linear mapping relationship; (2) automatically extracting effective image features; and (3) unifying the feature extraction, non-linear mapping, and image reconstruction into one optimization framework. In the training stage, we train a non-linear mapping between downsampled Landsat and MODIS data using a VDCN, and then we train a multi-scale super-resolution (MSSR) VDCN between the original Landsat and downsampled Landsat data. The prediction procedure contains three layers, where each layer consists of a VDCN-based prediction and a fusion model. These layers achieve the non-linear mapping from MODIS to downsampled Landsat data, the two-times SR of the downsampled Landsat data, and the five-times SR of the downsampled Landsat data, successively. Extensive evaluations were executed on two groups of commonly used Landsat–MODIS benchmark datasets. The quantitative evaluations on all prediction dates and the visual comparison on one key date demonstrate that the proposed approach achieves more accurate fusion results than sparse representation based methods. Full article
(This article belongs to the Section Remote Sensing Image Processing)
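A minimal PyTorch sketch of the core ingredient, a deep residual CNN that maps a coarse multiband patch to a finer one, is given below; the depth, width, band count, and training snippet are illustrative assumptions and do not reproduce the paper's architecture, losses, or training setup.

```python
# Minimal sketch (PyTorch): a very deep residual CNN mapping a coarse (MODIS-like) patch to a
# finer (downsampled-Landsat-like) patch; sizes and depth are illustrative assumptions.
import torch
import torch.nn as nn

class VDCNMapping(nn.Module):
    def __init__(self, bands=6, features=64, depth=10):
        super().__init__()
        layers = [nn.Conv2d(bands, features, kernel_size=3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, kernel_size=3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, bands, kernel_size=3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, coarse):
        # Residual learning: predict the high-frequency detail missing from the coarse input.
        return coarse + self.body(coarse)

model = VDCNMapping()
coarse_patch = torch.randn(1, 6, 40, 40)           # e.g., one 6-band coarse reflectance patch
fine_patch = model(coarse_patch)                    # same-size output with restored detail
loss = nn.functional.mse_loss(fine_patch, torch.randn_like(fine_patch))  # placeholder target
loss.backward()
```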
Show Figures
Graphical abstract and Figures 1–9: method flowchart; architecture of the non-linear mapping VDCN; fusion model with high-pass modulation and indicative weighting modules; quantitative results (RMSE, SAM, SSIM, ERGAS) for the CIA and LGC prediction dates; MODIS–Landsat image pairs for both sites; visual comparison of the SRSTF, CNNSTF, and VDCNSTF fusion results on the 8th date at each site.
18 pages, 3186 KiB  
Article
Aircraft Target Classification for Conventional Narrow-Band Radar with Multi-Wave Gates Sparse Echo Data
by Wantian Wang, Ziyue Tang, Yichang Chen, Yuanpeng Zhang and Yongjian Sun
Remote Sens. 2019, 11(22), 2700; https://doi.org/10.3390/rs11222700 - 18 Nov 2019
Cited by 8 | Viewed by 3148
Abstract
For a conventional narrow-band radar system, the detectable information of the target is limited, and it is difficult for the radar to accurately identify the target type. In particular, the classification probability will decrease further when part of the echo data is missing. [...] Read more.
For a conventional narrow-band radar system, the detectable information of the target is limited, and it is difficult for the radar to accurately identify the target type. In particular, the classification probability will decrease further when part of the echo data is missing. By extracting target features in the time and frequency domains from multi-wave gates sparse echo data, this paper presents a classification algorithm for conventional narrow-band radar to identify three different types of aircraft target, i.e., helicopter, propeller aircraft, and jet. Firstly, a classical sparse reconstruction algorithm is utilized to reconstruct the target frequency spectrum from single-wave gate sparse echo data. Then, the micro-Doppler effect caused by the rotating parts of different targets is analyzed, and micro-Doppler-based features, such as the amplitude deviation coefficient, time-domain waveform entropy, and frequency-domain waveform entropy, are extracted from the reconstructed echo data to identify the targets. Thirdly, the target features extracted from multi-wave gates reconstructed echo data are weighted and fused to improve the accuracy of classification. Finally, the fused feature vectors are fed into a support vector machine (SVM) model for classification. In contrast with conventional aircraft target classification algorithms, the proposed algorithm can effectively process sparse echo data and achieve higher classification probability via weighted feature fusion of multi-wave gates echo data. Experiments on synthetic data were carried out to validate the effectiveness of the proposed algorithm. Full article
(This article belongs to the Special Issue Radar and Sonar Imaging and Processing)
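The feature-plus-SVM stage can be sketched as follows; the exact definitions used here for the amplitude deviation coefficient (standard deviation over mean) and the time- and frequency-domain waveform entropies (Shannon entropy of the normalized magnitudes), as well as the synthetic echoes, are assumptions for illustration rather than the paper's definitions.

```python
# Minimal sketch: micro-Doppler-style features from a slow-time echo, fed to an SVM classifier.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def echo_features(echo):
    """echo: 1-D complex slow-time sequence for one wave gate."""
    amp = np.abs(echo)
    spec = np.abs(np.fft.fft(echo))
    def entropy(v):
        p = v / v.sum()
        p = p[p > 0]
        return -(p * np.log(p)).sum()
    adc = amp.std() / amp.mean()                          # amplitude deviation coefficient
    return np.array([adc, entropy(amp), entropy(spec)])   # time- and frequency-domain entropies

# Toy training set: 3 classes x 50 synthetic echoes with different modulation depths.
rng = np.random.default_rng(8)
def synth(mod_depth):
    t = np.arange(256)
    carrier = np.exp(1j * 2 * np.pi * 0.05 * t)
    body = carrier * (1 + mod_depth * np.sin(2 * np.pi * 0.01 * t))
    noise = 0.1 * (rng.normal(size=256) + 1j * rng.normal(size=256))
    return body + noise

X = np.array([echo_features(synth(d)) for d in np.repeat([0.1, 0.5, 1.0], 50)])
y = np.repeat([0, 1, 2], 50)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0)).fit(X, y)
print(clf.score(X, y))
```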
Show Figures
Graphical abstract and Figures 1–14: radar–rotor-target geometry; frequency-domain echoes of the helicopter, propeller, and jet targets; definition of multi-wave gates echo data; complete vs. sparse echo data; reconstructed spectra from the SL0 and OMP algorithms; extracted and fused features (amplitude deviation coefficient, frequency- and time-domain waveform entropy); three-class SVM model; flowchart of the proposed algorithm; reconstruction-algorithm and wave-gate-number experiments; classification results for single- and four-wave-gate training and testing.
20 pages, 6975 KiB  
Article
Assessment of Night-Time Lighting for Global Terrestrial Protected and Wilderness Areas
by Liangxian Fan, Jianjun Zhao, Yeqiao Wang, Zhoupeng Ren, Hongyan Zhang and Xiaoyi Guo
Remote Sens. 2019, 11(22), 2699; https://doi.org/10.3390/rs11222699 - 18 Nov 2019
Cited by 14 | Viewed by 4144
Abstract
Protected areas (PAs) play an important role in biodiversity conservation and ecosystem integrity. However, human development has threatened and affected the function and effectiveness of PAs. The Defense Meteorological Satellite Program/Operational Linescan System (DMSP/OLS) night-time stable light (NTL) data have proven to be [...] Read more.
Protected areas (PAs) play an important role in biodiversity conservation and ecosystem integrity. However, human development has threatened and affected the function and effectiveness of PAs. The Defense Meteorological Satellite Program/Operational Linescan System (DMSP/OLS) night-time stable light (NTL) data have proven to be an effective indicator of the intensity and change of human-induced urban development over a long time span and at a large spatial scale. We used the NTL data from 1992 to 2013 to characterize human-induced urban development and studied the spatial and temporal variation of the NTL of global terrestrial PAs. We selected seven types of PAs defined by the International Union for Conservation of Nature (IUCN), including strict nature reserve (Ia), wilderness area (Ib), national park (II), natural monument or feature (III), habitat/species management area (IV), protected landscape/seascape (V), and protected area with sustainable use of natural resources (VI). We evaluated the NTL digital number (DN) in PAs and their surrounding buffer zones, i.e., 0–1 km, 1–5 km, 5–10 km, 10–25 km, 25–50 km, and 50–100 km. The results revealed the level, growth rate, trend, and distribution pattern of NTL in PAs. Within PAs, areas of types V and Ib had the highest and lowest NTL levels, respectively. In the surrounding 1–100 km buffer zones, type V PAs also had the highest NTL level, but type VI PAs had the lowest NTL level. The NTL level in the areas surrounding PAs was higher than that within PAs. Types Ia and III PAs showed the highest and lowest NTL growth rates from 1992 to 2013, respectively, both inside and outside of PAs. The NTL distributions surrounding the Ib and VI PAs were different from those of the other types. The areas close to Ib and VI boundaries, i.e., in the 0–25 km buffer zones, showed lower NTL levels, and the highest NTL level was observed within the 25–100 km buffer zone. However, the other types of PAs showed the opposite pattern: the NTL level was lower in the distant buffer zones, and the lowest night light was within the 1–25 km buffer zones. Globally, 6.9% of PAs are being affected by NTL. Wilderness areas, e.g., in high-latitude regions, the Tibetan Plateau, the Amazon, and the Caribbean, are the least affected by NTL. The PAs in Europe, Asia, and North America are more affected by NTL than those in South America, Africa, and Oceania. Full article
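To make the buffer-zone analysis concrete, the sketch below averages calibrated DN values inside a PA and within successive distance rings. It assumes a precomputed signed-distance raster to the PA boundary and is only a schematic of the kind of zonal statistics described in the abstract, not the authors' processing chain.

```python
import numpy as np

def mean_dn(ntl, mask):
    """Mean night-time light DN over the pixels selected by a boolean mask."""
    vals = ntl[mask]
    return float(vals.mean()) if vals.size else np.nan

def buffer_profile(ntl, dist_to_pa, breaks_km=(0, 1, 5, 10, 25, 50, 100)):
    """Average DN inside a PA and in successive buffer rings.

    ntl        : 2-D array of calibrated DN values
    dist_to_pa : 2-D array of signed distances to the PA boundary (km),
                 negative or zero inside the PA (assumed to be precomputed)
    """
    profile = {"inside": mean_dn(ntl, dist_to_pa <= 0)}
    for lo, hi in zip(breaks_km[:-1], breaks_km[1:]):
        ring = (dist_to_pa > lo) & (dist_to_pa <= hi)
        profile[f"{lo}-{hi} km"] = mean_dn(ntl, ring)
    return profile
```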
(This article belongs to the Special Issue Remote Sensing Applications in Monitoring of Protected Areas)
Show Figures
Graphical abstract
Figure 1. Gross domestic product (GDP; unit: trillions of dollars) and the global sum of NTL (SNTL) (1992–2013) from (a) the raw NTL and (b) the calibrated NTL. Electricity consumption (EC; unit: billion kWh) and global SNTL (1992–2013) from (c) the raw NTL and (d) the calibrated NTL. DMSP/OLS: Defense Meteorological Satellite Program/Operational Linescan System.
Figure 2. The correlation of GDP (a) and EC (b) with the raw NTL and the calibrated NTL. The black line is the 1:1 line.
Figure 3. Spatial distribution of NTL in association with global terrestrial protected areas (PAs). The orange-colored areas represent the PAs; the dark blue-colored areas (including oceans) were in darkness; the yellow-colored areas had a higher NTL digital number (DN) value than the green-colored areas; the white-colored areas had no data.
Figure 4. (a) Average NTL DN value for every PA and (b) average NTL DN value for every continent. For (a), we used the natural breaks (Jenks) method to divide the PAs into three levels according to the average DN value. The green-colored PAs had an average value of 0, the yellow-colored PAs had a lower than average DN value, and the red-colored PAs had a higher than average DN value. For (b), the ranking of the average NTL DN value of PAs in each continent gave the order Europe > Asia > North America > South America > Africa > Oceania.
Figure 5. (a) Average trend of NTL of the global terrestrial PAs and (b) average trend of every continent. For (a), we used the natural breaks (Jenks) method to divide the PAs into three levels according to the average trend of NTL. The blue-colored PAs had an average trend of 0, the yellow-colored PAs had a lower than average trend, and the red-colored PAs had a higher than average trend. For (b), the ranking of the average NTL trend of PAs in each continent gave the order Europe > Asia > North America > South America > Africa > Oceania.
Figure 6. NTL level in different buffers for each type of PA. Columns with different colors represent the interior of PAs and different buffer zones.
Figure 7. NTL level for each type of PA in buffer zones. Columns with different colors represent different types of PAs. The light blue shadowed area represents the mean DN values of PAs and buffers.
Figure 8. Changes within PAs and in different buffer zones in the NTL level of different types of PAs in the time series. The numbers in the grid represent the average NTL DN values. The color of the grids runs from blue to red as the corresponding values increase. (a) Strict nature reserve (Ia), (b) wilderness area (Ib), (c) national park (II), (d) natural monument or feature (III), (e) habitat/species management area (IV), (f) protected landscape/seascape (V), and (g) protected area with sustainable use of natural resources (VI).
Figure 9. The NTL level of different types of PAs in their interior and surrounding buffer zones. The blue shadowed area indicates that the 1–10 km buffer zone had the highest NTL level.
Figure 10. Growth rate of NTL within every type of PA and the buffer zones.
Figure 11. Trends within every type of PA and of the buffers.
28 pages, 16322 KiB  
Article
Image Formation of Azimuth Periodically Gapped SAR Raw Data with Complex Deconvolution
by Yulei Qian and Daiyin Zhu
Remote Sens. 2019, 11(22), 2698; https://doi.org/10.3390/rs11222698 - 18 Nov 2019
Cited by 12 | Viewed by 2897
Abstract
The phenomenon of periodical gapping in Synthetic Aperture Radar (SAR), which is induced in various ways, creates challenges in focusing raw SAR data. To handle this problem, a novel method is proposed in this paper. Complex deconvolution is utilized to restore the azimuth [...] Read more.
The phenomenon of periodic gapping in synthetic aperture radar (SAR), which can be induced in various ways, creates challenges in focusing raw SAR data. To handle this problem, a novel method is proposed in this paper in which complex deconvolution is utilized to restore the azimuth spectrum of the complete data from the gapped raw data. In other words, the proposed method offers a new, robust way to cope with periodically gapped raw SAR data via complex deconvolution. It mainly consists of phase compensation followed by recovery of the azimuth spectrum of the raw data with complex deconvolution. The gapped data become sparser in the range-Doppler domain after phase compensation, which makes it feasible to recover the azimuth spectrum of the complete data from the gapped raw data via complex deconvolution in the Doppler domain. Afterwards, a traditional SAR imaging algorithm is capable of focusing the reconstructed raw data. The effectiveness of the proposed method was validated via point target simulation and surface target simulation. Moreover, real SAR data were utilized to further demonstrate its validity. Full article
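Because the periodic gaps multiply the slow-time signal by a 0/1 gate, the azimuth spectrum of the gapped data is the complete spectrum convolved with the gate spectrum. One generic way to undo this, sketched below, is a sparsity-driven iterative deconvolution (ISTA) applied per range cell after phase compensation; it exploits the fact that the phase-compensated spectrum is sparse. This is a schematic stand-in, not the authors' complex-deconvolution solver, and the regularization weight and iteration count are illustrative.

```python
import numpy as np

def soft(x, t):
    """Complex soft-thresholding operator."""
    mag = np.abs(x)
    return np.where(mag > t, (1 - t / np.maximum(mag, 1e-12)) * x, 0)

def recover_spectrum(y, gate, lam=0.05, n_iter=200):
    """Sparsity-driven recovery of the azimuth spectrum of one range cell.

    y    : zero-filled, phase-compensated slow-time samples (complex array)
    gate : 0/1 array marking the acquired azimuth positions
    Solves  min_S  0.5 * || gate * IFFT(S) - y ||^2 + lam * ||S||_1  with ISTA;
    the unit step size is valid because the unitary FFT and the 0/1 mask give
    an operator norm of at most 1.
    """
    S = np.fft.fft(y, norm="ortho")                      # initial spectrum estimate
    for _ in range(n_iter):
        resid = gate * np.fft.ifft(S, norm="ortho") - y  # data misfit in slow time
        S = soft(S - np.fft.fft(gate * resid, norm="ortho"), lam)
    return S
```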
(This article belongs to the Special Issue Radar and Sonar Imaging and Processing)
Show Figures
Graphical abstract
Figure 1. Sampling scheme of periodically gapped sampling in the azimuth direction.
Figure 2. Sampling scheme of uniform sampling in the azimuth direction.
Figure 3. Coarse compression of point targets: (a) distribution of nine points; (b) original echo data in the time domain, s(τ,η); (c) original echo data in the range-Doppler domain, s(τ,fη); and (d) data in the range-Doppler domain after phase compensation, Sc(τ,fη).
Figure 4. Coarse compression of real SAR data: (a) original echo data in the time domain, s(τ,η); (b) original echo data in the range-Doppler domain, s(τ,fη); and (c) data in the range-Doppler domain after phase compensation, Sc(τ,fη).
Figure 5. Sampling of the SAR azimuth periodically gapped data.
Figure 6. Demonstration of filling gaps with zeros.
Figure 7. Demonstration of the rectangular pulse sequence.
Figure 8. Procedure of restoration via complex deconvolution.
Figure 9. Flowchart of the proposed method.
Figure 10. Distribution of point targets in the simulation.
Figure 11. Focused point target results of T1, T2, and T3: (a) focused result of T1; (b) focused result of T2; and (c) focused result of T3.
Figure 12. Focused point target results of T4, T5, and T6: (a) focused result of T4; (b) focused result of T5; and (c) focused result of T6.
Figure 13. Focused point target results of T7, T8, and T9: (a) focused result of T7; (b) focused result of T8; and (c) focused result of T9.
Figure 14. 1D azimuth profiles of T1, T2, and T3: (a) 1D azimuth profile obtained by the adding zeros operation; and (b) 1D azimuth profile obtained by the proposed method.
Figure 15. 1D azimuth profiles of T4, T5, and T6: (a) 1D azimuth profile obtained by the adding zeros operation; and (b) 1D azimuth profile obtained by the proposed method.
Figure 16. 1D azimuth profiles of T7, T8, and T9: (a) 1D azimuth profile obtained by the adding zeros operation; and (b) 1D azimuth profile obtained by the proposed method.
Figure 17. Imaged results of the surface target simulation: (a) result obtained by the adding zeros operation; and (b) result obtained by the proposed method.
Figure 18. 1D profiles of results at −150 m in the range direction: (a) 1D profile of the result obtained by the adding zeros operation; and (b) 1D profile of the result obtained by the proposed method.
Figure 19. 1D profiles of results at 0 m in the range direction: (a) 1D profile of the result obtained by the adding zeros operation; and (b) 1D profile of the result obtained by the proposed method.
Figure 20. 1D profiles of results at 150 m in the range direction: (a) 1D profile of the result obtained by the adding zeros operation; and (b) 1D profile of the result obtained by the proposed method.
Figure 21. Results of real SAR data: (a) result acquired from the complete data; (b) result acquired by the adding zeros operation from gapped data; (c) result acquired with PG-APES from gapped data; (d) result acquired with the Burg algorithm from gapped data; (e) result acquired with the method in [23] from gapped data; and (f) result acquired with the proposed method from gapped data.
Figure 22. Chosen regions in the real SAR data.
Figure 23. Results of Region 1: (a) complete data; (b) adding zeros operation; (c) PG-APES; (d) Burg algorithm; (e) the method in [23]; and (f) the proposed method.
Figure 24. Results of Region 2: (a) complete data; (b) adding zeros operation; (c) PG-APES; (d) Burg algorithm; (e) the method in [23]; and (f) the proposed method.
Figure 25. Results of Region 3: (a) complete data; (b) adding zeros operation; (c) PG-APES; (d) Burg algorithm; (e) the method in [23]; and (f) the proposed method.
20 pages, 3636 KiB  
Article
Integrating LiDAR, Multispectral and SAR Data to Estimate and Map Canopy Height in Tropical Forests
by J. Camilo Fagua, Patrick Jantz, Susana Rodriguez-Buritica, Laura Duncanson and Scott J. Goetz
Remote Sens. 2019, 11(22), 2697; https://doi.org/10.3390/rs11222697 - 18 Nov 2019
Cited by 38 | Viewed by 6587
Abstract
Developing accurate methods to map vegetation structure in tropical forests is essential to protect their biodiversity and improve their carbon stock estimation. We integrated LIDAR (Light Detection and Ranging), multispectral and SAR (Synthetic Aperture Radar) data to improve the prediction and mapping of [...] Read more.
Developing accurate methods to map vegetation structure in tropical forests is essential to protect their biodiversity and improve their carbon stock estimation. We integrated LiDAR (Light Detection and Ranging), multispectral, and SAR (Synthetic Aperture Radar) data to improve the prediction and mapping of canopy height (CH) at high spatial resolution (30 m) in tropical forests in South America. We modeled and mapped CH estimated from aircraft LiDAR surveys as a ground reference, using annual metrics derived from multispectral and SAR satellite imagery in a dry forest, a moist forest, and a rainforest of tropical South America. We examined the effect of the three forest types, five regression algorithms, and three predictor groups on the modelling and mapping of CH. Our CH models reached errors ranging from 1.2–3.4 m in the dry forest and 5.1–7.4 m in the rainforest and explained variances from 94–60% in the dry forest and 58–12% in the rainforest. Our best models show higher accuracies than previous works in tropical forests. The average accuracy of the five regression algorithms decreased from the dry forest (2.6 ± 0.7 m) to the moist forest (5.7 ± 0.4 m) and the rainforest (6.6 ± 0.7 m). Random Forest regressions produced the most accurate models in the three forest types (1.2 ± 0.05 m in the dry forest, 4.9 ± 0.14 m in the moist forest, and 5.5 ± 0.3 m in the rainforest). Model performance varied considerably across the three predictor groups. Our results are useful for CH spatial prediction when GEDI (Global Ecosystem Dynamics Investigation lidar) data become available. Full article
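As a minimal sketch of the regression workflow (using the Random Forest algorithm that performed best in this abstract), the following example fits a RandomForestRegressor to per-pixel predictors against LiDAR-derived canopy height and reports RMSE. The file names and hyperparameters are hypothetical placeholders, not the authors' settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

# X: per-pixel annual metrics from multispectral and SAR imagery (n_samples, n_predictors)
# y: LiDAR-derived canopy height (m) used as the ground reference
X = np.load("predictors.npy")            # hypothetical file
y = np.load("lidar_canopy_height.npy")   # hypothetical file

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

rf = RandomForestRegressor(n_estimators=500, random_state=0)  # illustrative hyperparameters
rf.fit(X_train, y_train)

pred = rf.predict(X_test)
rmse = np.sqrt(mean_squared_error(y_test, pred))
print(f"RMSE = {rmse:.2f} m, R^2 = {r2_score(y_test, pred):.2f}")
```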
(This article belongs to the Section Forest Remote Sensing)
Show Figures
Graphical abstract
Figure 1. Study areas (a). Mato Grosso dry forest – MAT (b), Tapajós-Xingu moist forest – TAP (c), and Chocó-Darien rainforest – CHOCO (d).
Figure 2. Illustration of the LiDAR processing procedure in the Tapajós-Xingu moist forest. (a) X–Z distribution of LiDAR returns in a Landsat pixel area. (b) Three-dimensional view of CH estimation using the LiDAR returns. (c) Two-dimensional view of CH estimation using the LiDAR returns.
Figure 3. Average annual curves of multispectral bands and vegetation indices for CHs < 4 m and CHs > 30 m in three tropical forest types: Mato Grosso dry forest (MAT), Tapajós-Xingu moist forest (TAP), and Chocó-Darien rainforest (CHOCO). To build these average annual curves, we applied a Gaussian filter with a temporal window of 2 (annual corrected values) to each of the annual curves for smoothing and improving the visual differentiation of the curves. We then averaged these smoothed curves according to the CHs and forest types.
Figure 4. Accuracy (a) and explained variance (b) of the map models in three tropical forests: Mato Grosso dry forest (MAT), Tapajós-Xingu moist forest (TAP), and Chocó-Darien rainforest (CHOCO). Model accuracy was estimated by RMSE (root mean squared error) in meters, and explained variance by R². Five regression models were examined: random forest (RF), multivariate adaptive regression splines (MARS), linear regression (lm), Lasso and Elastic-Net Regularized Generalized Linear Models (GLM.net), and Support Vector Machine (SVM). Three groups of SAR and multispectral predictors were also evaluated: the first group formed by the 20 predictor variables (Vars.1), the second by 14 predictors that had significant correlations (significant r > 0.2 or k > 0.2) with the CH (Vars.2) (see Table 3), and the third group by non-collinear predictors with significant correlations (significant r > 0.2 or k > 0.2) with the CH (Vars.3).
Figure 5. Maps of canopy height (CH) in three tropical forests: Mato Grosso dry forest (MAT), Tapajós-Xingu moist forest (TAP), and Chocó-Darien rainforest (CHOCO). Model accuracy was estimated by RMSE after map construction using groups independent of the model training groups. Three groups of SAR and multispectral predictors were also evaluated: the first group formed by the 20 predictor variables (Vars.1), the second by 14 predictors that had significant correlations (significant r > 0.2 or k > 0.2) with the CH (Vars.2) (see Table 3), and the third group by non-collinear predictors with significant correlations (significant r > 0.2 or k > 0.2) with the CH (Vars.3).
22 pages, 7751 KiB  
Article
The Influence of Heterogeneity on Lunar Irradiance Based on Multiscale Analysis
by Xiangzhao Zeng and Chuanrong Li
Remote Sens. 2019, 11(22), 2696; https://doi.org/10.3390/rs11222696 - 18 Nov 2019
Cited by 3 | Viewed by 2909
Abstract
The Moon is a stable light source for the radiometric calibration of satellite sensors. It acts as a diffuse panel that reflects sunlight in all directions, however, the lunar surface is heterogeneous due to its topography and different mineral content and chemical composition [...] Read more.
The Moon is a stable light source for the radiometric calibration of satellite sensors. It acts as a diffuse panel that reflects sunlight in all directions; however, the lunar surface is heterogeneous due to its topography and the different mineral content and chemical composition at different locations, resulting in different optical properties. In order to perform radiometric calibration using the Moon, a lunar irradiance model covering different observation geometries is required. Currently, two lunar irradiance models exist, namely, the Robotic Lunar Observatory (ROLO) model and the Miller and Turner 2009 (MT2009) model. The ROLO lunar irradiance model is widely used as the radiometric standard for on-orbit sensors. The MT2009 lunar irradiance model is popular for remote sensing at night; however, the original version of the MT2009 model gives little consideration to the heterogeneous lunar surface and lunar topography. Since the heterogeneity embedded in the lunar surface is the key to improving the lunar irradiance model, this study analyzes the influence of the heterogeneous surface on the irradiance of moonlight based on model data at different scales. A heterogeneous correction factor is defined to describe the impact of the heterogeneous lunar surface on lunar irradiance. On the basis of the analysis, the following conclusions can be made. First, the influence of heterogeneity in the waning hemisphere is greater than that in the waxing hemisphere at all 32 wavelengths of the ROLO filters. Second, the heterogeneity embedded in the lunar surface has less impact on lunar irradiance at lower resolution. Third, the heterogeneous correction factor is scale independent. Finally, the lunar irradiance uncertainty introduced by topography is very small and decreases as the resolution of the model data decreases due to the loss of topographic information. Full article
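The abstract does not give the exact definition of the heterogeneous correction factor; one plausible reading, sketched below purely for illustration, is the ratio between the disk-integrated irradiance computed from the heterogeneous surface and that computed from a uniform (mean-property) surface at the same geometry. The function names and the pixel-solid-angle weighting are assumptions, not the paper's formulation.

```python
import numpy as np

def disk_irradiance(radiance, pixel_solid_angle):
    """Integrate disk-resolved radiance into a single irradiance value."""
    return np.nansum(radiance * pixel_solid_angle)

def heterogeneity_correction(radiance_hetero, radiance_uniform, pixel_solid_angle):
    """Assumed form: ratio of irradiance from the heterogeneous surface to that
    from a uniform (mean-property) surface at the same illumination geometry."""
    return (disk_irradiance(radiance_hetero, pixel_solid_angle) /
            disk_irradiance(radiance_uniform, pixel_solid_angle))
```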
Show Figures
Graphical abstract
Figure 1. The multiscale digital elevation models (DEMs) derived from Lunar Orbiter Laser Altimeter (LOLA) data: (a) 8 pixels per degree, (b) 4 pixels per degree, (c) 2 pixels per degree, (d) 1 pixel per degree, (e) 0.5 pixels per degree, (f) 0.25 pixels per degree, (g) 0.1 pixels per degree, and (h) 0.083 pixels per degree.
Figure 2. The Tycho Crater at six different scales derived from LOLA data: (a) 8 pixels per degree, (b) 4 pixels per degree, (c) 2 pixels per degree, (d) 1 pixel per degree, (e) 0.5 pixels per degree, and (f) 0.25 pixels per degree.
Figure 3. The average and standard deviation of slope at eight different scales.
Figure 4. Generated multiscale lunar images: (a) 8 pixels per degree, (b) 4 pixels per degree, (c) 2 pixels per degree, (d) 1 pixel per degree, (e) 0.5 pixels per degree, (f) 0.25 pixels per degree, (g) 0.1 pixels per degree, and (h) 0.083 pixels per degree.
Figure 5. The comparison between the waxing and waning sides of the Moon: (a) lunar irradiance on the waxing and waning hemispheres of the Moon and (b) relative difference of lunar irradiance between the waxing and waning hemispheres.
Figure 6. Comparison of lunar irradiance at eight different scales: (a) comparison between the reference scale of 8 pixels per degree and the scales of 4 pixels per degree and 0.25 pixels per degree; (b) comparison between the reference scale of 8 pixels per degree and the scales of 2 pixels per degree, 0.5 pixels per degree, and 0.1 pixels per degree; (c) comparison between the reference scale of 8 pixels per degree and the scales of 1 pixel per degree and 0.083 pixels per degree.
Figure 7. Time series of lunar irradiance at the 351.2 nm band under different waxing (panel a) and waning (panel b) lunar phases. The lunar phases are sampled at ±5, ±10, ±20, ±30, ±40, ±50, ±60, ±70, ±80, and ±90 degrees.
Figure 8. Examples of model data at the scale of 8 pixels per degree: (a) model data with the influence of heterogeneity, (b) model data without the influence of topography, and (c) model data without the influence of heterogeneity.
Figure 9. The heterogeneous correction factor under different lunar phases for six wavelengths of the ROLO filters: (a) 351.19 nm, (b) 474.90 nm, (c) 745.30 nm, (d) 941.94 nm, (e) 1243.16 nm, and (f) 2125.54 nm.
Figure 10. (a–h) The heterogeneous correction factors at eight different scale levels and (i) the absolute relative difference between the heterogeneous correction factor with and without the consideration of topography.
18 pages, 5777 KiB  
Article
Multispectral Image Super-Resolution Burned-Area Mapping Based on Space-Temperature Information
by Peng Wang, Lei Zhang, Gong Zhang, Benzhou Jin and Henry Leung
Remote Sens. 2019, 11(22), 2695; https://doi.org/10.3390/rs11222695 - 18 Nov 2019
Cited by 9 | Viewed by 3172
Abstract
Multispectral imaging (MI) provides important information for burned-area mapping. Due to the severe conditions of burned areas and the limitations of sensors, the resolution of collected multispectral images is sometimes very rough, hindering the accurate determination of burned areas. Super-resolution mapping (SRM) has [...] Read more.
Multispectral imaging (MI) provides important information for burned-area mapping. Due to the severe conditions of burned areas and the limitations of sensors, the resolution of collected multispectral images is sometimes very rough, hindering the accurate determination of burned areas. Super-resolution mapping (SRM) has been proposed for mapping burned areas in rough images to solve this problem, allowing super-resolution burned-area mapping (SRBAM). However, the existing SRBAM methods do not use sufficiently accurate space information and detailed temperature information. To improve the mapping accuracy of burned areas, an improved SRBAM method utilizing space–temperature information (STI) is proposed here. STI contains two elements, a space element and a temperature element. We utilized the random-walker algorithm (RWA) to characterize the space element, which encompassed accurate object space information, while the temperature element with rich temperature information was derived by calculating the normalized burn ratio (NBR). The two elements were then merged to produce an objective function with space–temperature information. The particle swarm optimization algorithm (PSOA) was employed to handle the objective function and derive the burned-area mapping results. The dataset of the Landsat-8 Operational Land Imager (OLI) from Denali National Park, Alaska, was used for testing and showed that the STI method is superior to the traditional SRBAM method. Full article
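The temperature element relies on the normalized burn ratio (NBR), a standard band ratio; the sketch below computes it per pixel from near-infrared and short-wave infrared reflectance (e.g., Landsat-8 OLI bands 5 and 7). The burned-pixel threshold in the trailing comment is shown only as an illustration, not as the paper's decision rule.

```python
import numpy as np

def normalized_burn_ratio(nir, swir2):
    """NBR = (NIR - SWIR2) / (NIR + SWIR2), computed per pixel.

    nir, swir2 : reflectance arrays, e.g., Landsat-8 OLI band 5 and band 7.
    """
    nir = nir.astype(np.float64)
    swir2 = swir2.astype(np.float64)
    return (nir - swir2) / np.maximum(nir + swir2, 1e-12)

# Burned pixels typically show low NBR (or a large drop, dNBR, relative to a
# pre-fire image); the threshold below is illustrative only.
# burned_mask = normalized_burn_ratio(nir, swir2) < -0.1
```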
(This article belongs to the Special Issue New Advances on Sub-pixel Processing: Unmixing and Mapping Methods)
Show Figures
Graphical abstract
Figure 1. (a) False color image (short-wave infrared 2 band, near-infrared band, and blue band as red, green, and blue, respectively). (b) Reference image.
Figure 2. The flowchart of producing the space element. PCA: principal component analysis, PC: principal component, RWA: random-walker algorithm.
Figure 3. The flowchart of the space–temperature information (STI) method. NBR: normalized burn ratio, PSOA: particle swarm optimization algorithm.
Figure 4. Fine image of five burned areas. (a) Area 1, (b) Area 2, (c) Area 3, (d) Area 4, (e) Area 5.
Figure 5. Flowchart of the experimental process. SRM: super-resolution mapping.
Figure 6. Rough image of five burned areas. (a) Area 1, (b) Area 2, (c) Area 3, (d) Area 4, (e) Area 5.
Figure 7. Burned-area mapping results in area 1. (a) Reference image. Images obtained by (b) hybrid spatial attraction model (HSAM), (c) object-scale spatial SRM (OSRM), (d) super-resolution burned-area mapping (SRBAM), (e) STI.
Figure 8. Burned-area mapping results in area 2. (a) Reference image. Images obtained by (b) HSAM, (c) OSRM, (d) SRBAM, (e) STI.
Figure 9. Burned-area mapping results in area 3. (a) Reference image. Images obtained by (b) HSAM, (c) OSRM, (d) SRBAM, (e) STI.
Figure 10. Burned-area mapping results in area 4. (a) Reference image. Images obtained by (b) HSAM, (c) OSRM, (d) SRBAM, (e) STI.
Figure 11. Burned-area mapping results in area 5. (a) Reference image. Images obtained by (b) HSAM, (c) OSRM, (d) SRBAM, (e) STI.
Figure 12. Burned area (%) derived using the four methods tested for different values of S: (a) S = 5 and (b) S = 10.
Figure 13. Burned area (%) derived using the four methods tested for different values of the weight parameter θ. (a) Area 1, (b) Area 2, (c) Area 3, (d) Area 4, (e) Area 5.
Figure 14. Burned area (%) derived using the four methods tested for different values of the segmentation scale parameter Q. (a) Area 1, (b) Area 2, (c) Area 3, (d) Area 4, (e) Area 5.
Figure 15. Operation time (s) of the four SRM methods.
22 pages, 8790 KiB  
Article
Enhanced Feature Extraction for Ship Detection from Multi-Resolution and Multi-Scene Synthetic Aperture Radar (SAR) Images
by Fei Gao, Wei Shi, Jun Wang, Erfu Yang and Huiyu Zhou
Remote Sens. 2019, 11(22), 2694; https://doi.org/10.3390/rs11222694 - 18 Nov 2019
Cited by 29 | Viewed by 4347
Abstract
Independent of daylight and weather conditions, synthetic aperture radar (SAR) images have been widely used for ship monitoring. The traditional methods for SAR ship detection are highly dependent on the statistical models of sea clutter or some predefined thresholds, and generally require a [...] Read more.
Independent of daylight and weather conditions, synthetic aperture radar (SAR) images have been widely used for ship monitoring. Traditional methods for SAR ship detection are highly dependent on statistical models of sea clutter or predefined thresholds, and generally require a multi-step operation, which results in time-consuming and less robust ship detection. Recently, deep learning algorithms have found wide application in ship detection from SAR images. However, due to the multi-resolution imaging mode and complex background, it is hard for the network to extract representative SAR target features, which limits ship detection performance. In order to enhance the feature extraction ability of the network, three improvement techniques have been developed. Firstly, multi-level sparse optimization of the SAR image is carried out to handle clutter and sidelobes so as to enhance the discrimination of the features of SAR images. Secondly, we propose a novel split convolution block (SCB) to enhance the feature representation of small targets, which divides the SAR images into smaller sub-images as the input of the network. Finally, a spatial attention block (SAB) is embedded in the feature pyramid network (FPN) to reduce the loss of spatial information during the dimensionality reduction process. In this paper, experiments on multi-resolution SAR images from GaoFen-3 and Sentinel-1 under complex backgrounds are carried out, and the results verify the effectiveness of SCB and SAB. The comparison results also show that the proposed method is superior to several state-of-the-art object detection algorithms. Full article
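As an illustration of what a spatial attention block can look like, the PyTorch sketch below implements a generic CBAM-style spatial attention module that re-weights every spatial location of a feature map. The abstract does not specify the authors' exact SAB design, so the pooling and convolution choices here are assumptions.

```python
import torch
import torch.nn as nn

class SpatialAttentionBlock(nn.Module):
    """Generic spatial attention: pool along the channel axis (average and max),
    fuse the two maps with a convolution, and re-weight each location with a
    sigmoid mask, which helps preserve spatial detail when channels are reduced."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg_map = torch.mean(x, dim=1, keepdim=True)     # (N, 1, H, W)
        max_map, _ = torch.max(x, dim=1, keepdim=True)   # (N, 1, H, W)
        mask = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * mask                                  # spatially re-weighted features

# features = torch.randn(1, 256, 64, 64)
# out = SpatialAttentionBlock()(features)  # same shape, re-weighted
```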
(This article belongs to the Section Ocean Remote Sensing)
Show Figures
Figure 1. (a) The overall framework of the proposed method; (b) the architecture of 2S-RetinaNet.
Figure 2. SAR images reconstructed by sparse optimization. (a), (b), (c), and (d) denote the original image patch, the result of sparse optimization reconstruction with 50 iterations, the result of sparse optimization reconstruction with 80 iterations, and the fusion result of the original image and the two sparse optimization reconstruction results, respectively. (e), (f), (g), and (h) display the second image patch example and its corresponding results, respectively. (i), (j), (k), and (l) exhibit the third image patch example and its corresponding results, respectively.
Figure 3. The architecture of RetinaNet: the green parts are the pyramidal features extracted by the feature pyramid network (FPN); the yellow and purple parts are the classification and box regression subnets, respectively. (a) indicates the feedforward network. (b) is the feature pyramid net used to obtain the multi-scale features. (c) illustrates the two subnets, i.e., the top (yellow) one for classification and the bottom (purple) one for bounding box regression.
Figure 4. The architecture of FPN: Ci (i = 1, 2, …, 4) denotes the convolutional blocks. The skip connections apply a 1 × 1 convolution filter to reduce the channels of Ci (i = 2, 3, 4) to 256. During the top-down flow, the number of channels of the corresponding Ci (i = 2, 3) is first reduced to 256, and the layer is then combined with the up-sampled previous layer. Finally, a 3 × 3 convolution filter is used to obtain the feature maps Pi (i = 2, 3, 4) for the classes and their locations.
Figure 5. The relative bounding-box size distribution of the training set.
Figure 6. The bottom-up pathway of FPN embedded with SCB.
Figure 7. The location of SAB in FPN and the diagram of SAB.
Figure 8. Some samples of ship chips. (a)–(e) are cropped from Gaofen-3 images. (f)–(j) are cropped from Sentinel-1 images.
Figure 9. The PR curves of the four methods.
Figure 10. The visualization of split convolution results. (a), (e), (i), and (m) denote the original image patch, the split convolution result, the normal convolution result, and the fusion result of split convolution and normal convolution, respectively. (b), (f), (j), and (n) display the second image patch example and its corresponding results, respectively. (c), (g), (k), and (o) exhibit the third image patch example and its corresponding results, respectively. (d), (h), (l), and (p) show the fourth example image patch and its corresponding results, respectively.
Figure 11. The visualization of spatial attention results. (a), (e), and (i) denote the original image patch, the spatial attention map, and the original image patch masked by the spatial attention map, respectively. (b), (f), and (j) display the second image patch example and its corresponding results, respectively. (c), (g), and (k) exhibit the third image patch example and its corresponding results, respectively. (d), (h), and (l) show the fourth example image patch and its corresponding results, respectively.
Figure 12. Detection results on four image patches, shown by the red rectangles. The green rectangles are the ground truth. (a), (e), (i), (m), and (q) denote the original image patch, the 2S-RetinaNet detection result, the Faster R-CNN detection result, the YOLOv3 detection result, and the bounding boxes detected by SSD, respectively. (b), (f), (j), (n), and (r) display the second image patch example and its corresponding results, respectively. (c), (g), (k), (o), and (s) exhibit the third image patch example and its corresponding results, respectively. (d), (h), (l), (p), and (t) show the fourth example image patch and its corresponding results, respectively.
17 pages, 5080 KiB  
Article
Modeling and Assessment of GPS/Galileo/BDS Precise Point Positioning with Ambiguity Resolution
by Xuexi Liu, Hua Chen, Weiping Jiang, Ruijie Xi, Wen Zhao, Chuanfeng Song and Xingyu Zhou
Remote Sens. 2019, 11(22), 2693; https://doi.org/10.3390/rs11222693 - 18 Nov 2019
Cited by 7 | Viewed by 3264
Abstract
Multi-frequency and multi-GNSS integration is currently becoming an important trend in the development of satellite navigation and positioning technology. In this paper, GPS/Galileo/BeiDou (BDS) precise point positioning (PPP) with ambiguity resolution (AR) are discussed in detail. The mathematical model of triple-system PPP AR [...] Read more.
Multi-frequency and multi-GNSS integration is currently becoming an important trend in the development of satellite navigation and positioning technology. In this paper, GPS/Galileo/BeiDou (BDS) precise point positioning (PPP) with ambiguity resolution (AR) is discussed in detail. The mathematical model of triple-system PPP AR and the principle of fractional cycle bias (FCB) estimation are first described. With the data of 160 stations in the Multi-GNSS Experiment (MGEX) from day of year (DOY) 321–350, 2018, the FCBs of the three systems are estimated, and the experimental results show that the range of most GPS wide-lane (WL) FCBs is within 0.1 cycles over one month, while that of the Galileo WL FCBs is within 0.05 cycles. For BDS, a classification estimation method is used to estimate the FCB separately for GEO and non-GEO (IGSO and MEO) satellites. The variation range of the BDS GEO WL FCB can reach 0.5 cycles, while the BDS non-GEO WL FCB does not exceed 0.1 cycles within a month. However, the accuracies of the GPS, Galileo, and BDS non-GEO narrow-lane (NL) FCBs are basically the same. In addition, the number of visible satellites and the Position Dilution of Precision (PDOP) values of different combined systems are analyzed and evaluated in this paper. The results show that the triple-system combination can significantly increase the number of observable satellites, optimize the spatial distribution structure of the satellites, and is significantly superior to the dual-system and single-system solutions. Finally, the positioning characteristics of single-, dual-, and triple-system solutions are analyzed. The results of the single-station positioning experiment show that the accuracy and convergence speed of the fixed solutions for each system are better than those of the corresponding float solutions. The average root mean squares (RMSs) of the float and fixed solutions in the east and north directions for the GPS/Galileo/BDS combined system are the smallest, being 0.92 cm, 0.52 cm and 0.50 cm, 0.46 cm, respectively, while the accuracy of GPS in the up direction is the highest, at 1.44 cm and 1.27 cm, respectively. Therefore, the combined system can accelerate the convergence speed and greatly enhance the stability of the positioning results. Full article
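For background, the wide-lane float ambiguity whose fractional part underlies WL FCB estimation is commonly formed with the Melbourne-Wübbena combination; the Python sketch below shows this for GPS L1/L2 on a single station and arc. It is a simplified illustration: the paper's network adjustment, and the Galileo/BDS frequencies, are not reproduced here.

```python
import numpy as np

C = 299792458.0                    # speed of light (m/s)
F1, F2 = 1575.42e6, 1227.60e6      # GPS L1/L2 carrier frequencies (Hz)
LAM_WL = C / (F1 - F2)             # wide-lane wavelength, about 0.862 m

def mw_combination(L1, L2, P1, P2):
    """Melbourne-Wübbena combination (metres) from carrier-phase (L) and code (P)
    observations, both expressed in metres."""
    return (F1 * L1 - F2 * L2) / (F1 - F2) - (F1 * P1 + F2 * P2) / (F1 + F2)

def wl_fractional_bias(L1, L2, P1, P2):
    """Fractional part (cycles) of the float wide-lane ambiguity, averaged over an arc.

    After the integer part is removed, the remaining fraction is the quantity
    that a network adjustment combines to separate satellite and receiver WL FCBs.
    """
    n_float = np.mean(mw_combination(L1, L2, P1, P2)) / LAM_WL
    return n_float - np.round(n_float)
```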
(This article belongs to the Special Issue Global Navigation Satellite Systems for Earth Observing System)
Show Figures
Figure 1. Geographic distribution of the reference network. Red triangles denote the 160 stations used for GPS/Galileo/BDS FCB estimation, while yellow stars denote the eight stations used to evaluate the performance of the combined systems.
Figure 2. GPS WL FCB from DOY 321 to 350, 2018, referenced to G01.
Figure 3. GPS NL FCB on DOY 328, 2018, referenced to G01.
Figure 4. Galileo WL FCB from DOY 321 to 350, 2018, referenced to E01.
Figure 5. Galileo NL FCB on DOY 328, 2018, referenced to E01.
Figure 6. BDS WL FCB from DOY 321 to 350, 2018. Top panel: BDS GEO. Bottom panel: BDS non-GEO. C02 and C06 are taken as the reference for BDS GEO and non-GEO, respectively.
Figure 7. BDS NL FCB on DOY 328, 2018. Top panel: BDS GEO. Bottom panel: BDS non-GEO. C02 and C06 are taken as the reference for BDS GEO and non-GEO, respectively.
Figure 8. Global distribution of the visible satellite number for seven different constellation combinations on DOY 321, 2018.
Figure 9. Global distribution of the satellite PDOP values for seven different constellation combinations on DOY 321, 2018.
Figure 10. Comparison of the float and fixed PPP results from single-system and combined solutions in the E, N, and U components, respectively, at station KAT1 on DOY 324, 2018.
Figure 11. Comparison of the fixed results for the seven systems in the E, N, and U components, respectively, at station KAT1 on DOY 324, 2018.
Figure 12. The average RMS of the float PPP solution from DOY 321 to 350, 2018.
Figure 13. The average RMS of the fixed PPP solution from DOY 321 to 350, 2018.
Figure 14. The average RMS of the float and fixed PPP solutions for the seven systems from DOY 321 to 350, 2018.
16 pages, 7186 KiB  
Article
Adaptive Least-Squares Collocation Algorithm Considering Distance Scale Factor for GPS Crustal Velocity Field Fitting and Estimation
by Wei Qu, Hailu Chen, Shichuan Liang, Qin Zhang, Lihua Zhao, Yuan Gao and Wu Zhu
Remote Sens. 2019, 11(22), 2692; https://doi.org/10.3390/rs11222692 - 18 Nov 2019
Cited by 4 | Viewed by 3163
Abstract
High-precision, high-reliability, and high-density GPS crustal velocity are extremely important requirements for geodynamic analysis. The least-squares collocation algorithm (LSC) has unique advantages over crustal movement models to overcome observation errors in GPS data and the sparseness and poor geometric distribution in GPS observations. [...] Read more.
High-precision, high-reliability, and high-density GPS crustal velocities are extremely important for geodynamic analysis. The least-squares collocation (LSC) algorithm has unique advantages over crustal movement models in overcoming observation errors in GPS data and the sparseness and poor geometric distribution of GPS observations. However, traditional LSC algorithms often encounter negative covariance statistics, and calculating the statistical Gaussian covariance function based on the selected distance interval thus leads to inaccurate estimation of the correlation between the random signals. An unreliable Gaussian statistical covariance function also leads to inconsistency between observation noise and signal variance. In this study, we present an improved LSC algorithm that combines a distance scale factor with adaptive adjustment to overcome these problems. The rationality and practicability of the new algorithm were verified using GPS observations. The results show that the new algorithm, by introducing the distance scale factor, effectively weakens the influence of systematic errors through an improved function model. The new algorithm can better reflect the characteristics of GPS crustal movement and can provide valuable basic data for the analysis of regional tectonic dynamics using GPS observations. Full article
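For orientation, standard LSC predicts the signal at new points from the observed residual velocities through a Gaussian covariance function; the sketch below shows that baseline prediction step only. The paper's distance scale factor and adaptive variance-component estimation are not included, and the covariance parameters are assumed to be already fitted.

```python
import numpy as np

def gaussian_cov(d, c0, k):
    """Gaussian covariance function C(d) = C0 * exp(-k * d**2)."""
    return c0 * np.exp(-k * d ** 2)

def lsc_predict(xy_obs, v_obs, noise_var, xy_new, c0, k):
    """Least-squares collocation prediction of one velocity component.

    xy_obs    : (n, 2) station coordinates
    v_obs     : (n,)  observed velocity component (trend already removed)
    noise_var : (n,)  observation noise variances
    xy_new    : (m, 2) prediction points
    c0, k     : Gaussian covariance parameters (assumed known/fitted here)
    """
    d_oo = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    d_no = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=-1)
    c_oo = gaussian_cov(d_oo, c0, k) + np.diag(noise_var)  # signal + noise covariance
    c_no = gaussian_cov(d_no, c0, k)                       # cross-covariance
    return c_no @ np.linalg.solve(c_oo, v_obs)             # predicted signal at new points
```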
Show Figures
Graphical abstract
Figure 1. Framework diagram of the adaptive least-squares collocation algorithm considering the distance scale factor (the ADLSC algorithm).
Figure 2. The GPS velocities for 1999–2013 (mm/a) and the location of the Sichuan-Yunnan block. The red rectangle outlines the study area in mainland China (a). Gray solid lines represent the major faults [29] (b). Red triangles represent major cities (capital cities) (b). Black arrows are the observed GPS velocities, and the red color at a GPS station marks the checking points (b).
Figure 3. Residuals between the estimated and observed velocity of the checking points: (a) east direction; (b) north direction.
Figure 4. Residuals between the estimated and observed velocity of the fitting points in the (a) east and (b) north directions.
Figure 5. Correlation analysis between the fitted and observed velocity in the east (a1, b1, c1, and d1) and north directions (a2, b2, c2, and d2).
Figure 6. The different and random selection of checking points in (a) experiment 2 and (b) experiment.
Figure 7. Optimal distance scale factor of the ADLSC in the (a) east direction and (b) north direction.
Figure 8. Comparison of the signal variance estimation results from the first Helmert VCEM calculation of the two algorithms under different noise levels: (a) east direction; (b) north direction.
Figure 9. Comparison of the iteration times and fitting RMS in the iterative process of the VCEM calculation: (a) east direction; (b) north direction.
24 pages, 2645 KiB  
Article
Hyperspectral Pansharpening Based on Spectral Constrained Adversarial Autoencoder
by Gang He, Jiaping Zhong, Jie Lei, Yunsong Li and Weiying Xie
Remote Sens. 2019, 11(22), 2691; https://doi.org/10.3390/rs11222691 - 18 Nov 2019
Cited by 14 | Viewed by 3375
Abstract
Hyperspectral (HS) imaging is conducive to better describing and understanding the subtle differences in spectral characteristics of different materials due to sufficient spectral information compared with traditional imaging systems. However, it is still challenging to obtain high resolution (HR) HS images in both [...] Read more.
Hyperspectral (HS) imaging is conducive to better describing and understanding the subtle differences in the spectral characteristics of different materials due to its sufficient spectral information compared with traditional imaging systems. However, it is still challenging to obtain high resolution (HR) HS images in both the spectral and spatial domains. Different from previous methods, we first propose a spectral constrained adversarial autoencoder (SCAAE) to extract deep features of HS images and combine them with the panchromatic (PAN) image to competently represent the spatial information of HR HS images, which is more comprehensive and representative. In particular, based on the adversarial autoencoder (AAE) network, the SCAAE network is built with a spectral constraint added to the loss function so that spectral consistency and a higher quality of spatial information enhancement can be ensured. Then, an adaptive fusion approach with a simple feature selection rule is introduced to make full use of the spatial information contained in both the HS image and the PAN image. Specifically, the spatial information from the two different sensors is introduced into a convex optimization equation to obtain the fusion proportion of the two parts and estimate the generated HR HS image. By analyzing the results of the experiments executed on the tested data sets, it can be found that in CC, SAM, and RMSE the performance of the proposed algorithm is improved by about 1.42%, 13.12%, and 29.26%, respectively, on average, which is preferable to the well-performing HySure method. Compared to the MRA-based method, the improvement of the proposed method in the above three indexes is 17.63%, 0.83%, and 11.02%, respectively. Moreover, the results are 0.87%, 22.11%, and 20.66% better, respectively, than those of the PCA-based method, which fully illustrates the superiority of the proposed method in spatial information preservation. All the experimental results demonstrate that the proposed method is superior to the state-of-the-art fusion methods in terms of subjective and objective evaluations. Full article
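The abstract does not spell out the exact form of the spectral constraint; a common choice for enforcing spectral consistency, shown below purely as an assumption-laden sketch, is a mean spectral-angle (SAM-style) penalty added to the autoencoder's reconstruction loss.

```python
import numpy as np

def spectral_angle(x, y, eps=1e-12):
    """Spectral angle (radians) between two spectra; 0 means identical spectral shape."""
    num = np.sum(x * y)
    den = np.sqrt(np.sum(x ** 2)) * np.sqrt(np.sum(y ** 2)) + eps
    return np.arccos(np.clip(num / den, -1.0, 1.0))

def spectral_constraint_loss(recon, target):
    """Mean spectral angle over all pixels of an (H, W, B) hyperspectral cube.

    Adding such a term to the reconstruction loss penalizes changes in spectral
    shape and therefore encourages spectral consistency of the learned features.
    """
    recon_2d = recon.reshape(-1, recon.shape[-1])
    target_2d = target.reshape(-1, target.shape[-1])
    return float(np.mean([spectral_angle(a, b) for a, b in zip(recon_2d, target_2d)]))
```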
(This article belongs to the Special Issue Remote Sensing Image Restoration and Reconstruction)
Show Figures
Graphical abstract
Figure 1. The overall flowchart of the proposed SCAAE-based pansharpening approach.
Figure 2. The process of feature extraction by SCAAE.
Figure 3. CC and SAM curves as functions of the number of hidden nodes and depth for the Moffett Field, Salinas Scene, Pavia University, and Chikusei data sets.
Figure 4. Visualization of the intermediate results of 30 hidden nodes from the SCAAE.
Figure 5. Visual results obtained by different methods on the Moffett data set: (a) ground truth, (b) up-sampled HSI, (c) PAN, (d) SFIM, (e) MTF-GLP, (f) MTF-GLP-HPM, (g) GS, (h) GSA, (i) GFPCA, (j) CNMF, (k) Lanaras's, (l) FUSE, (m) HySure, and (n) SCAAE. Note that the false color image is chosen for clear visualization (red: 10, green: 30, and blue: 50).
Figure 6. Absolute difference maps between the pansharpened results and the reference image obtained by different methods on the Moffett data set: (a) reference, (b) SFIM, (c) MTF-GLP, (d) MTF-GLP-HPM, (e) GS, (f) GSA, (g) GFPCA, (h) CNMF, (i) Lanaras's, (j) FUSE, (k) HySure, (l) SCAAE.
Figure 7. Visual results obtained by different methods on the Salinas Scene data set: (a) ground truth, (b) up-sampled HSI, (c) PAN, (d) SFIM, (e) MTF-GLP, (f) MTF-GLP-HPM, (g) GS, (h) GSA, (i) GFPCA, (j) CNMF, (k) Lanaras's, (l) FUSE, (m) HySure, (n) SCAAE. Note that the false color image is chosen for clear visualization (red: 20, green: 40, and blue: 80).
Figure 8. Absolute difference maps between the pansharpened results and the reference image obtained by different methods on the Salinas Scene data set: (a) reference, (b) SFIM, (c) MTF-GLP, (d) MTF-GLP-HPM, (e) GS, (f) GSA, (g) GFPCA, (h) CNMF, (i) Lanaras's, (j) FUSE, (k) HySure, (l) SCAAE.
Figure 9. Visual results obtained by different methods on the University of Pavia data set: (a) ground truth, (b) up-sampled HSI, (c) PAN, (d) SFIM, (e) MTF-GLP, (f) MTF-GLP-HPM, (g) GS, (h) GSA, (i) GFPCA, (j) CNMF, (k) Lanaras's, (l) FUSE, (m) HySure, (n) SCAAE. Note that the false color image is chosen for clear visualization (red: 20, green: 40, and blue: 80).
Figure 10. Absolute difference maps between the pansharpened results and the reference image obtained by different methods on the University of Pavia data set: (a) reference, (b) SFIM, (c) MTF-GLP, (d) MTF-GLP-HPM, (e) GS, (f) GSA, (g) GFPCA, (h) CNMF, (i) Lanaras's, (j) FUSE, (k) HySure, (l) SCAAE.
Figure 11. Visual results obtained by different methods on the Chikusei data set: (a) ground truth, (b) up-sampled HSI, (c) PAN, (d) SFIM, (e) MTF-GLP, (f) MTF-GLP-HPM, (g) GS, (h) GSA, (i) GFPCA, (j) CNMF, (k) Lanaras's, (l) FUSE, (m) HySure, (n) SCAAE. Note that the false color image is chosen for clear visualization (red: 20, green: 40, and blue: 80).
Figure 12. Absolute difference maps between the pansharpened results and the reference image obtained by different methods on the Chikusei data set: (a) reference, (b) SFIM, (c) MTF-GLP, (d) MTF-GLP-HPM, (e) GS, (f) GSA, (g) GFPCA, (h) CNMF, (i) Lanaras's, (j) FUSE, (k) HySure, (l) SCAAE.
26 pages, 14207 KiB  
Article
Fine-Grained Classification of Hyperspectral Imagery Based on Deep Learning
by Yushi Chen, Lingbo Huang, Lin Zhu, Naoto Yokoya and Xiuping Jia
Remote Sens. 2019, 11(22), 2690; https://doi.org/10.3390/rs11222690 - 18 Nov 2019
Cited by 12 | Viewed by 4277
Abstract
Hyperspectral remote sensing simultaneously obtains abundant spectral and spatial information about the observed object. This provides an opportunity to classify hyperspectral imagery (HSI) in a fine-grained manner. In this study, the fine-grained classification of HSI, which contains a large number of classes, is [...] Read more.
Hyperspectral remote sensing simultaneously obtains abundant spectral and spatial information about the observed object. This provides an opportunity to classify hyperspectral imagery (HSI) in a fine-grained manner. In this study, the fine-grained classification of HSI, which contains a large number of classes, is investigated. On one hand, traditional classification methods cannot handle the fine-grained classification of HSI well; on the other hand, deep learning methods have shown their strength in fine-grained classification. Therefore, in this paper, deep learning is explored for supervised and semi-supervised fine-grained classification of HSI. For supervised fine-grained classification, a densely connected convolutional neural network (DenseNet) is explored for accurate classification. Moreover, DenseNet is combined with a pre-processing technique (principal component analysis or an auto-encoder) or a post-processing technique (a conditional random field) to further improve classification performance. For semi-supervised fine-grained classification, a generative adversarial network (GAN), which includes a discriminative CNN and a generative CNN, is carefully designed. The GAN makes full use of both labeled and unlabeled samples to improve classification accuracy. The proposed methods were tested on the Indian Pines data set, which contains 333,951 samples in 52 classes. The experimental results show that the deep learning-based methods provide great improvements over traditional methods, which demonstrates that deep models have huge potential for the fine-grained classification of HSI. Full article
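As a rough illustration of the densely connected design the abstract refers to (a four-layer dense block also appears as Figure 2 below), the following PyTorch-style sketch shows how each layer in a dense block receives the concatenation of all preceding feature maps. The growth rate, kernel size, and layer count are illustrative placeholders, not the authors' settings.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Minimal dense block: every layer takes the concatenation of all
    earlier feature maps as input and contributes growth_rate new maps."""
    def __init__(self, in_channels, growth_rate=12, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1),
            ))
            channels += growth_rate

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # Each layer sees all previously produced feature maps.
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)
```

For example, a 16-band input patch would leave this block with 16 + 4 × 12 = 64 channels, since each layer's output is appended to the running concatenation.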
(This article belongs to the Special Issue Deep Learning and Feature Mining Using Hyperspectral Imagery)
Figure 1. The framework of DenseNet for HSI supervised fine-grained classification.
Figure 2. A four-layer dense block.
Figure 3. The framework of HSI semi-supervised fine-grained classification.
Figure 4. The Indian Pines dataset: (a) false-color composite image (bands 40, 25, and 10); (b) ground reference map.
Figure 5. Test accuracy of different supervised methods on the Indian Pines dataset.
Figure 6. Test accuracy of different supervised methods with varying numbers of training samples.
Figure 7. Test accuracy of different semi-supervised methods with varying numbers of training samples.
Figure 8. (a) False-color composite image of the Indian Pines dataset; classification maps obtained using (b) EMP-RF, (c) PCA-DenseNet, (d) DenseNet-CRF, and (e) Semi-GAN.