Remote Sens., Volume 12, Issue 17 (September-1 2020) – 199 articles

Cover Story (view full-size image): High-resolution orthorectified imagery acquired by aerial or satellite sensors is well known to be a rich source of information that enables building detection, urban monitoring, and planning, among other applications. One can also extract roof geometry and height, which are essential for the creation of digital twins and the placement of solar panels. In this work, we extract building heights in a fully automated mode from a single airborne or satellite optical image, without relying on any further imagery, point clouds, or GIS records. To this end, we propose a convolutional neural network architecture called IM2ELEVATION that produces an estimated digital surface model (DSM) image as output. Our model achieves state-of-the-art performance, and we show that the quality of the training dataset is key to successful estimation of the DSM. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the table of contents of newly released issues.
  • Papers are published in both HTML and PDF formats; PDF is the official version. To view a paper in PDF format, click the "PDF Full-text" link and open it with the free Adobe Reader.
19 pages, 6166 KiB  
Article
Application of UAV Photogrammetry with LiDAR Data to Facilitate the Estimation of Tree Locations and DBH Values for High-Value Timber Species in Northern Japanese Mixed-Wood Forests
by Kyaw Thu Moe, Toshiaki Owari, Naoyuki Furuya, Takuya Hiroshima and Junko Morimoto
Remote Sens. 2020, 12(17), 2865; https://doi.org/10.3390/rs12172865 - 3 Sep 2020
Cited by 26 | Viewed by 5866
Abstract
High-value timber species play an important economic role in forest management. The individual tree information for such species is necessary for practical forest management and for conservation purposes. Digital aerial photogrammetry derived from an unmanned aerial vehicle (UAV-DAP) can provide fine spatial and [...] Read more.
High-value timber species play an important economic role in forest management. Individual tree information for such species is necessary for practical forest management and for conservation purposes. Digital aerial photogrammetry derived from an unmanned aerial vehicle (UAV-DAP) can provide fine spatial and spectral information, as well as information on the three-dimensional (3D) structure of a forest canopy. Light detection and ranging (LiDAR) data enable area-wide 3D tree mapping and provide accurate forest floor terrain information. In this study, we evaluated the potential use of UAV-DAP and LiDAR data for estimating the individual tree locations and diameter at breast height (DBH) values of large high-value timber trees in northern Japanese mixed-wood forests. We performed multiresolution segmentation of UAV-DAP orthophotographs to derive individual tree crowns. We used object-based image analysis and a random forest algorithm to classify the forest canopy into five categories: three high-value timber species, other broadleaf species, and conifer species. The UAV-DAP technique produced overall accuracy values of 73% and 63% for classification of the forest canopy in two forest management sub-compartments. In addition, we estimated individual tree DBH values of high-value timber species using field survey, LiDAR, and UAV-DAP data. The results indicated that UAV-DAP can predict individual tree DBH values with accuracy comparable to DBH prediction using field and LiDAR data. The results of this study are useful for forest managers when searching for high-value timber trees and estimating tree size in large mixed-wood forests, and can be applied in single-tree management systems for high-value timber species. Full article
(This article belongs to the Special Issue 3D Point Clouds in Forest Remote Sensing)
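The DBH estimation step described in the abstract can be illustrated with a toy regression. The sketch below is not the authors' actual model: the predictor set (crown area and tree height), the data, and the linear form are all hypothetical, chosen only to show how crown metrics derived from UAV-DAP might be regressed against field-measured DBH.

```python
import numpy as np

# Hypothetical training data: crown area (m^2) and tree height (m)
# for six trees, with field-measured DBH (cm) as the response.
crown_area = np.array([35.0, 52.0, 48.0, 70.0, 41.0, 63.0])
height     = np.array([18.0, 22.0, 21.0, 26.0, 19.0, 24.0])
dbh        = np.array([42.0, 55.0, 51.0, 68.0, 45.0, 61.0])

# Design matrix with an intercept column, fitted by ordinary least squares.
X = np.column_stack([np.ones_like(crown_area), crown_area, height])
coef, *_ = np.linalg.lstsq(X, dbh, rcond=None)

# In-sample fit quality (RMSE) and a prediction for a new tree.
pred = X @ coef
rmse = np.sqrt(np.mean((pred - dbh) ** 2))
new_tree = np.array([1.0, 50.0, 20.0])  # intercept, crown area, height
print(coef, rmse, new_tree @ coef)
```

In practice such a model would be fitted per species and validated against held-out field plots, which is essentially the field/LiDAR/UAV-DAP comparison the paper reports.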
Graphical abstract
Figure 1
<p>Location of the study area. (<b>a</b>) The University of Tokyo Hokkaido Forest. (<b>b</b>) Sub-compartment 36B and 59A. (<b>c</b>) Measured trees in sub-compartment 36B. (<b>d</b>) Measured trees in sub-compartment 59A.</p>
Figure 2
<p>Correlations between crown areas derived from field measurements, manual delineation, and multiresolution segmentation of UAV orthophotographs. CA represents crown area. The first, second, and third row represent monarch birch, castor aralia, and Japanese oak, respectively. RS-derived CA represents CA derived from multiresolution segmentation.</p>
Figure 3
<p>Individual tree crown results derived from multiresolution segmentation for three high-value timber species: (<b>a</b>) monarch birch, (<b>b</b>) castor aralia, and (<b>c</b>) Japanese oak in sub-compartment 36B; (<b>d</b>) monarch birch, (<b>e</b>) castor aralia, and (<b>f</b>) Japanese oak in sub-compartment 59A.</p>
Figure 4
<p>Species separability boxplots for classification of the forest canopy in sub-compartment 36B.</p>
Figure 5
<p>Species separability boxplots for classification of the forest canopy in sub-compartment 59A.</p>
Figure 6
<p>Individual tree crowns derived from the RF classification, where each tree crown also indicates the individual tree’s spatial position: (<b>a</b>) sub-compartment 36B and (<b>b</b>) sub-compartment 59A.</p>
Figure 7
<p>Observed and predicted DBH values for monarch birch (first row), castor aralia (second row), and Japanese oak (third row).</p>
23 pages, 66606 KiB  
Article
GSCA-UNet: Towards Automatic Shadow Detection in Urban Aerial Imagery with Global-Spatial-Context Attention Module
by Yuwei Jin, Wenbo Xu, Zhongwen Hu, Haitao Jia, Xin Luo and Donghang Shao
Remote Sens. 2020, 12(17), 2864; https://doi.org/10.3390/rs12172864 - 3 Sep 2020
Cited by 24 | Viewed by 4558
Abstract
As an inevitable phenomenon in most optical remote-sensing images, the effect of shadows is prominent in urban scenes. Shadow detection is critical for exploiting shadows and recovering the distorted information. Unfortunately, in general, automatic shadow detection methods for urban aerial images cannot achieve [...] Read more.
As an inevitable phenomenon in most optical remote-sensing images, the effect of shadows is prominent in urban scenes. Shadow detection is critical for exploiting shadows and recovering the distorted information. Unfortunately, automatic shadow detection methods for urban aerial images generally cannot achieve satisfactory performance, due to the limitation of feature patterns and the lack of consideration of non-local contextual information. To address this challenging problem, this paper develops a global-spatial-context-attention (GSCA) module that self-adaptively aggregates global contextual information over the spatial dimension for each pixel. The GSCA module was embedded into a modified U-shaped encoder–decoder network, derived from the UNet network, to output the final shadow predictions. The network was trained on a newly created shadow detection dataset, and the binary cross-entropy (BCE) loss function was modified to enhance the training procedure. The performance of the proposed method was evaluated on several typical urban aerial images. Experimental results suggest that the proposed method achieves a better trade-off between automaticity and accuracy. The F1-score, overall accuracy, balanced-error-rate, and intersection-over-union metrics of the proposed method were higher than those of other state-of-the-art shadow detection methods. Full article
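The idea of aggregating, for every pixel, contextual information from across the spatial dimension can be sketched with a criss-cross-style attention pass in plain NumPy. This is an illustrative approximation of such a module, not the paper's exact GSCA architecture: the dot-product similarity, the row-plus-column context set, and the array shapes are all assumptions made for the sketch.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gsca_like(f):
    """For each pixel, reweight its feature by attending over all pixels
    in the same row and column (criss-cross-style aggregation)."""
    H, W, C = f.shape
    out = np.zeros_like(f)
    for i in range(H):
        for j in range(W):
            q = f[i, j]                                             # query vector
            ctx = np.concatenate([f[i, :, :], f[:, j, :]], axis=0)  # row + column context
            w = softmax(ctx @ q)                                    # similarity weights
            out[i, j] = w @ ctx                                     # weighted aggregation
    return out

rng = np.random.default_rng(0)
f = rng.standard_normal((4, 5, 3))  # toy feature map (H, W, C)
g = gsca_like(f)
print(g.shape)
```

A trained module would learn query/key/value projections and be embedded at multiple decoder stages; here the raw features stand in for all three to keep the aggregation mechanism visible.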
Graphical abstract
Figure 1
<p>Pipeline of the proposed method for shadow detection in urban aerial images.</p>
Figure 2
<p>Instances of the synthetic adversarial images using the proposed method. (<b>a</b>) Original image. (<b>b</b>) Reference shadow map. (<b>c</b>–<b>h</b>) Generated shaded-variant results when <math display="inline"><semantics> <mrow> <mi>γ</mi> <mo>=</mo> <mfenced separators="" open="{" close="}"> <mn>0.6</mn> <mo>,</mo> <mn>0.7</mn> <mo>,</mo> <mn>0.8</mn> <mo>,</mo> <mn>1.2</mn> <mo>,</mo> <mn>1.4</mn> <mo>,</mo> <mn>1.6</mn> </mfenced> </mrow> </semantics></math>, respectively. (<b>i</b>) Generated disturbed-dark-object result. The dark pentagon in panel (<b>i</b>) is the synthetic dark object.</p>
Figure 3
<p>Challenging cases of shadow detection in urban aerial images. Red rectangle, shadow region; blue rectangle, non-shadow region. (<b>a</b>,<b>b</b>) Left picture, original image; two right small pictures, close-ups of selected shadow and non-shadow regions. (<b>c</b>) Confusion caused by local contextual features.</p>
Figure 4
<p>Diagram of the proposed GSCA module. Mul, element-wise product; Add, element-wise addition. Pixel marked with blue in the final output of the GSCA module was reweighted by aggregating contextual correlation of all pixels in the same row and column.</p>
Figure 5
<p>Architecture of the proposed GSCA-UNet model for shadow detection in urban aerial images. The ResNeXt-101 encoder path is adopted to encode both coarse and fine dense semantic features of the input image. The modified decoder path of UNet is responsible for fusing different-stage features and gradually retrieving the detailed information to output the final shadow predictions.</p>
Figure 6
<p>Structure of the feature-fusion module. <span class="html-italic">f</span>, <math display="inline"><semantics> <msup> <mi>f</mi> <msup> <mrow/> <mo>′</mo> </msup> </msup> </semantics></math>, and <math display="inline"><semantics> <msup> <mi>f</mi> <mo>″</mo> </msup> </semantics></math> denote the input feature map, the temporary feature map, and the final output of the recurrent GSCA module, respectively.</p>
Figure 7
<p>Representative shadow images and corresponding reference shadow maps. (<b>a</b>–<b>c</b>) Label included two categories: shadow (white) and non-shadow (black). White, pixel value = 1; black, pixel value = 0.</p>
Figure 8
<p>Variation curves of loss value and overall accuracy during training. (<b>a</b>) Loss value curves; (<b>b</b>) Overall accuracy curves.</p>
Figure 9
<p>Toronto image results of different methods. (<b>a</b>) Original image; (<b>b</b>) reference shadow map; (<b>c</b>) image-matting method (IMM); (<b>d</b>) extended-random-walker (ERW) method; (<b>e</b>) object-oriented (OO) method; (<b>f</b>) BDRAR method; (<b>g</b>) proposed method. Scribbles marked with yellow and blue in panel (<b>c</b>) are the required non-shadow and shadow samples, respectively, for performing IMM.</p>
Figure 10
<p>Chicago image results of different methods. (<b>a</b>) Original image; (<b>b</b>) reference shadow map; (<b>c</b>) IMM; (<b>d</b>) ERW method; (<b>e</b>) OO method; (<b>f</b>) BDRAR method; (<b>g</b>) proposed method. Scribbles marked with yellow and blue in panel (<b>c</b>) are the required non-shadow and shadow samples, respectively, for performing IMM.</p>
Figure 11
<p>Vienna image results of different methods. (<b>a</b>) Original image; (<b>b</b>) reference shadow map; (<b>c</b>) IMM; (<b>d</b>) ERW method; (<b>e</b>) OO method; (<b>f</b>) BDRAR method; (<b>g</b>) proposed method. Scribbles marked with yellow and blue in panel (<b>c</b>) are the required non-shadow and shadow samples, respectively, for performing IMM.</p>
Figure 12
<p>Austin image results of different methods. (<b>a</b>) Original image; (<b>b</b>) reference shadow map; (<b>c</b>) IMM; (<b>d</b>) ERW method; (<b>e</b>) OO method; (<b>f</b>) BDRAR method; (<b>g</b>) proposed method. Scribbles marked with yellow and blue in panel (<b>c</b>) are the required non-shadow and shadow samples, respectively, for performing IMM.</p>
Figure 13
<p>Examples of the proposed method using global spatial contextual information to refine easily confused regions in the Toronto image. (<b>a</b>) Original image; (<b>b</b>) reference shadow map; (<b>c</b>) IMM; (<b>d</b>) ERW method; (<b>e</b>) OO method; (<b>f</b>) BDRAR method; (<b>g</b>) proposed method. Scribbles marked with yellow and blue in panel (<b>c</b>) are the required non-shadow and shadow samples, respectively, for performing IMM.</p>
21 pages, 9845 KiB  
Article
Fusarium Wilt of Radish Detection Using RGB and Near Infrared Images from Unmanned Aerial Vehicles
by L. Minh Dang, Hanxiang Wang, Yanfen Li, Kyungbok Min, Jin Tae Kwak, O. New Lee, Hanyong Park and Hyeonjoon Moon
Remote Sens. 2020, 12(17), 2863; https://doi.org/10.3390/rs12172863 - 3 Sep 2020
Cited by 31 | Viewed by 5342
Abstract
The radish is a delicious, healthy vegetable and an important ingredient to many side dishes and main recipes. However, climate change, pollinator decline, and especially Fusarium wilt cause a significant reduction in the cultivation area and the quality of the radish yield. Previous [...] Read more.
The radish is a delicious, healthy vegetable and an important ingredient in many side dishes and main recipes. However, climate change, pollinator decline, and especially Fusarium wilt cause a significant reduction in the cultivation area and the quality of the radish yield. Previous studies on plant disease identification have relied heavily on extracting features manually from images, which is time-consuming and inefficient. In addition to Red-Green-Blue (RGB) images, the development of near-infrared (NIR) sensors has enabled a more effective way to monitor diseases and evaluate plant health based on multispectral imagery. Thus, this study compares two distinct approaches to detecting radish wilt using RGB and NIR images taken by unmanned aerial vehicles (UAVs). The main research contributions include (1) a high-resolution RGB and NIR radish field dataset captured by drone from low to high altitudes, which can serve several research purposes; (2) the implementation of a superpixel segmentation method to segment captured radish field images into separate segments; (3) a customized deep learning-based radish identification framework for the extracted segmented images, which achieved remarkable performance in terms of accuracy and robustness, with a highest accuracy of 96%; (4) a proposed disease severity analysis that can detect different stages of the wilt disease; and (5) a demonstration that the approach based on NIR images is more straightforward and effective in detecting wilt disease than the learning approach based on the RGB dataset. Full article
(This article belongs to the Section AI Remote Sensing)
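The NIR-based route rests on the normalized difference vegetation index, NDVI = (NIR − Red) / (NIR + Red), with wilted plants scoring lower than healthy ones. A minimal sketch of the computation and a threshold-based severity split follows; the reflectance values and the 0.5/0.2 thresholds are illustrative assumptions, not the study's calibrated values.

```python
import numpy as np

# Hypothetical per-pixel reflectances from the NIR and red bands (0-1 scale).
nir = np.array([0.60, 0.55, 0.30, 0.20])
red = np.array([0.10, 0.12, 0.25, 0.22])

# NDVI = (NIR - Red) / (NIR + Red); healthy vegetation scores high.
ndvi = (nir - red) / (nir + red)

# Illustrative severity thresholds: healthy / light disease / serious disease.
severity = np.select([ndvi >= 0.5, ndvi >= 0.2], ["healthy", "light"],
                     default="serious")
print(ndvi.round(3), severity)
```

The same per-pixel thresholding, applied to extracted radish regions rather than raw pixels, is what a severity map like the paper's NDVI figures would be built from.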
Graphical abstract
Figure 1
<p>Sample images captured by unmanned aerial vehicle (UAV) at three different altitudes (3 m, 7 m, and 15 m). The top three images belong to the near-infrared (NIR) dataset, whereas the bottom three images are from the Red-Green-Blue (RGB) dataset.</p>
Figure 2
<p>Satellite map of the radish fields (33°30′5.44″ N, 126°51′25.39″ E), where the radish dataset was collected (Google map image).</p>
Figure 3
<p>An RGB radish sample captured by UAV illustrates three distinctive types of objects (the green line shows radish regions, the blue line indicates soil, and the black line indicates plastic mulch).</p>
Figure 4
<p>Overall structure of the proposed wilt detection framework. Two UAVs were used to capture the RGB and the NIR images at the 3-m altitude. After that, two distinct sub-processes were implemented to perform radish wilt detection on the RGB and the NIR datasets.</p>
Figure 5
<p>The stitching result for 13 RGB images at an altitude of 3 meters (the mosaic image size is 8404 × 3567).</p>
Figure 6
<p>Superpixel segmentation with different numbers of superpixels. We zoomed in on a specific wilt region (<b>b</b>) in the original image (<b>a</b>). After that, superpixel segmentation was applied to the zoomed image with different numbers of superpixels: (<b>c1</b>) 500 superpixels, (<b>c2</b>) 1000 superpixels, and (<b>c3</b>) 2000 superpixels.</p>
Figure 7
<p>RadRGB architecture that involves the input images of 64 × 64, a detailed configuration of five convolutional layers, three max-pooling layers, and three output classes (C indicates the convolutional layer and M represents the max-pooling layer).</p>
Figure 8
<p>Disease severity classification process (scaled up to 100% from the original image). Each 64 × 64 radish image (first column) was converted to hue, saturation, value (HSV) colorspace (second column), and then a thresholding process (third column) was conducted to categorize the disease severity into healthy (first row), light (second row), or serious (third row).</p>
Figure 9
<p>Performance of the RadRGB model on each fold.</p>
Figure 10
<p>Example of the radish field classification and disease severity analysis on the RGB image, including (<b>a</b>) the extracted radish region from the RadRGB model, (<b>b</b>) wilt severity analysis, with blue boxes indicating examples of wilt regions, and (<b>c</b>) comparison between detected wilt regions and original images (scaled up to 150% from (<b>b</b>)).</p>
Figure 11
<p>Wilt disease severity analysis for extracted radish regions (scaled up to 100% from the original image). (<b>a</b>) Healthy radish regions, (<b>b</b>) light disease regions, and (<b>c</b>) serious disease regions.</p>
Figure 12
<p>The distinction between three corresponding normalized difference vegetation index (NDVI) maps for radish at different stages (scaled up to 100% from the original image), including (<b>a</b>) healthy, (<b>b</b>) light disease, and (<b>c</b>) serious disease.</p>
Figure 13
<p>Boxplot showing the correlation between NDVI values and Fusarium wilt disease on 12 January 2018 and 28 January 2018. The graph shows a negative trend in NDVI, with higher values in healthy radish and lower values in radish infected with wilt disease, with lowest NDVI values recorded for the radishes at the late stage of wilt disease.</p>
2 pages, 512 KiB  
Correction
Correction: Palombo, A. and Santini, F. ImaAtCor: A Physically Based Tool for Combined Atmospheric and Topographic Corrections of Remote Sensing Images. Remote Sensing 2020, 12, 2076
by Angelo Palombo and Federico Santini
Remote Sens. 2020, 12(17), 2862; https://doi.org/10.3390/rs12172862 - 3 Sep 2020
Viewed by 2149
Abstract
The authors wish to make the following corrections to this paper [...] Full article
15 pages, 7107 KiB  
Article
NOAA Satellite Soil Moisture Operational Product System (SMOPS) Version 3.0 Generates Higher Accuracy Blended Satellite Soil Moisture
by Jifu Yin, Xiwu Zhan and Jicheng Liu
Remote Sens. 2020, 12(17), 2861; https://doi.org/10.3390/rs12172861 - 3 Sep 2020
Cited by 17 | Viewed by 3835
Abstract
Soil moisture plays a vital role for the understanding of hydrological, meteorological, and climatological land surface processes. To meet the need of real time global soil moisture datasets, a Soil Moisture Operational Product System (SMOPS) has been developed at National Oceanic and Atmospheric [...] Read more.
Soil moisture plays a vital role in the understanding of hydrological, meteorological, and climatological land surface processes. To meet the need for real-time global soil moisture datasets, a Soil Moisture Operational Product System (SMOPS) has been developed at the National Oceanic and Atmospheric Administration to provide a one-stop shop for soil moisture observations from all available satellite sensors. What makes the SMOPS unique is its near-real-time global blended soil moisture product. Since the first version of SMOPS was publicly released in 2010, the system has been updated twice based on user feedback, through improved retrieval algorithms and the inclusion of observations from new satellite sensors. Version 3.0 of SMOPS has been operational since 2017. Significant differences in climatological averages lead to remarkable distinctions in data quality between the newest and the older versions of the SMOPS blended soil moisture product. This study reveals that SMOPS version 3.0 offers substantially reduced data uncertainties and increased correlations with respect to quality-controlled in situ measurements. The new version also agrees more robustly with the European Space Agency's Climate Change Initiative (ESA_CCI) soil moisture datasets. With its higher accuracy, the blended data product from the new version of SMOPS is expected to benefit hydrological, meteorological, and climatological research, as well as numerical weather, climate, and water prediction operations. Full article
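The comparisons against in situ and ESA_CCI data rest on the standard trio of soil moisture validation metrics used in the figures: correlation coefficient (r), RMSE, and unbiased RMSE (ubRMSE, the RMSE after removing the mean bias). A minimal sketch with hypothetical soil moisture series (the values are invented, not SMOPS or SCAN data):

```python
import numpy as np

def validate(sat, insitu):
    """Correlation, RMSE, and unbiased RMSE of a satellite product
    against collocated in situ soil moisture."""
    r = np.corrcoef(sat, insitu)[0, 1]
    diff = sat - insitu
    rmse = np.sqrt(np.mean(diff ** 2))
    ubrmse = np.sqrt(np.mean((diff - diff.mean()) ** 2))  # bias removed
    return r, rmse, ubrmse

# Hypothetical daily soil moisture (m^3/m^3) at one station.
insitu = np.array([0.20, 0.25, 0.22, 0.30, 0.28, 0.24])
sat    = np.array([0.23, 0.27, 0.25, 0.31, 0.30, 0.27])
r, rmse, ubrmse = validate(sat, insitu)
print(round(r, 3), round(rmse, 3), round(ubrmse, 3))
```

Note that ubRMSE is always at most the RMSE; a product with a constant wet bias (as in this toy series) can still score a low ubRMSE, which is why the paper reports both.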
Graphical abstract
Figure 1
<p>Average soil moisture (in m<sup>3</sup>/m<sup>3</sup>) for (<b>a</b>) SMOPS version 1.0 over the 1 June 2007–3 November 2011 period, (<b>b</b>) SMOPS version 2.0 over the 16 November 2011–20 September 2016 period, (<b>c</b>) SMOPS version 3.0 over the 1 April 2015–31 December 2019 period, as well as (<b>d</b>) global domain-averaged frequency probability as a function of average soil moisture for the three versions of the SMOPS blended soil moisture data product during the corresponding product time periods.</p>
Figure 2
<p>Processing flow of the daily SMOPS blended soil moisture data product. The abbreviations SM and Tb indicate soil moisture and brightness temperature.</p>
Figure 3
<p>With respect to the quality-controlled soil climate analysis network (SCAN) observations, correlation coefficients (<span class="html-italic">r</span>) for (<b>a</b>) SMOPS version 1.0 over the 1 June 2007–3 November 2011 period, (<b>b</b>) SMOPS version 2.0 over the 16 November 2011–20 September 2016 period, (<b>c</b>) SMOPS version 3.0 over the 1 April 2015–31 December 2019 period, as well as (<b>d</b>) contiguous United States (CONUS) domain-averaged frequency probability as a function of correlation coefficient for the three versions of the SMOPS blended soil moisture data product during the corresponding product time periods, with curves shifting toward the right (left) indicating stronger (weaker) correlations.</p>
Figure 4
<p>With respect to the quality-controlled SCAN observations, root mean square errors (RMSEs, in m<sup>3</sup>/m<sup>3</sup>) for (<b>a</b>) SMOPS version 1.0 over the 1 June 2007–3 November 2011 period, (<b>b</b>) SMOPS version 2.0 over the 16 November 2011–20 September 2016 period, (<b>c</b>) SMOPS version 3.0 over the 1 April 2015–31 December 2019 period, as well as (<b>d</b>) CONUS domain-averaged frequency probability as a function of RMSE for the three versions of the SMOPS blended soil moisture data product during the corresponding product time periods, with curves shifting toward the left (right) indicating lower (greater) errors.</p>
Figure 5
<p>With respect to the quality-controlled SCAN observations, unbiased root mean square errors (<span class="html-italic">ubRMSE</span>s, in m<sup>3</sup>/m<sup>3</sup>) for (<b>a</b>) SMOPS version 1.0 over the 1 June 2007–3 November 2011 period, (<b>b</b>) SMOPS version 2.0 over the 16 November 2011–20 September 2016 period, (<b>c</b>) SMOPS version 3.0 over the 1 April 2015–31 December 2019 period, as well as (<b>d</b>) CONUS domain-averaged frequency probability as a function of ubRMSE for the three versions of the SMOPS blended soil moisture data product during the corresponding product time periods, with curves shifting toward the left (right) indicating lower (greater) errors.</p>
Figure 6
<p>With respect to the European Space Agency’s Climate Change Initiative (ESA_CCI) observations, root mean square differences (RMSDs, in m<sup>3</sup>/m<sup>3</sup>) for (<b>a</b>) SMOPS version 1.0 over the 1 June 2007–3 November 2011 period, (<b>b</b>) SMOPS version 2.0 over the 16 November 2011–20 September 2016 period, (<b>c</b>) SMOPS version 3.0 over the 1 April 2015–31 December 2019 period, as well as (<b>d</b>) global domain-averaged frequency probability as a function of RMSD for the three versions of the SMOPS blended soil moisture data product during the corresponding product time periods, with curves shifting toward the left (right) indicating lower (greater) errors.</p>
Figure 7
<p>With respect to the quality controlled SCAN observations, differences in RMSEs (m<sup>3</sup>/m<sup>3</sup>) between SMOPS version 3.0 and ESA_CCI version 4.5 over the 1 April 2015–31 December 2018 period. Sites in blue (red) color indicate that SMOPS is better (worse) than ESA_CCI.</p>
23 pages, 9953 KiB  
Article
Impact of Three Gorges Reservoir Water Impoundment on Vegetation–Climate Response Relationship
by Mengqi Tian, Jianzhong Zhou, Benjun Jia, Sijing Lou and Huiling Wu
Remote Sens. 2020, 12(17), 2860; https://doi.org/10.3390/rs12172860 - 3 Sep 2020
Cited by 18 | Viewed by 3574
Abstract
In recent years, the impact of global climate change and human activities on vegetation has become increasingly prominent. Understanding vegetation change and its response to climate variables and human activities are key tasks in predicting future environmental changes, climate changes and ecosystem evolution. [...] Read more.
In recent years, the impact of global climate change and human activities on vegetation has become increasingly prominent. Understanding vegetation change and its response to climate variables and human activities is a key task in predicting future environmental changes, climate changes, and ecosystem evolution. This paper explores the impact of Three Gorges Reservoir (TGR) water impoundment on the vegetation–climate response relationship in the Three Gorges Reservoir Region (TGRR) and its surrounding region. Firstly, based on the SPOT/VEGETATION NDVI and ERA5 reanalysis datasets, the correlation between climatic factors (temperature and precipitation) and NDVI was analyzed using the partial correlation coefficient method. Secondly, a nonlinear fitting method was used to fit the mapping relationship between NDVI and climatic factors. Then, a residual analysis was conducted to evaluate the impact of TGR impoundment on the vegetation–climate response relationship. Finally, a sensitivity index (SI), a sensitivity variation index (SVI), and a difference index (DI) were defined to quantify the variation of the vegetation–climate response relationship before and after water impoundment. The results show that water impoundment might have some impact on the vegetation–climate response, an impact that gradually diminished with increasing distance from the channel. Compared with the residual analysis method, the SI and DI index methods are more intuitive, and combining the two may provide new ideas for studying the impact of human activities on vegetation. Full article
(This article belongs to the Section Remote Sensing in Agriculture and Vegetation)
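The partial correlation analysis mentioned above isolates the NDVI response to one climatic factor while controlling for the other. One standard way to compute it is to regress both variables on the controlled factor and correlate the residuals, sketched here on synthetic data; this regression-residual formulation is the textbook method, not necessarily the authors' exact implementation, and the synthetic coefficients are arbitrary.

```python
import numpy as np

def partial_corr(x, y, z):
    """Partial correlation of x and y controlling for z:
    correlate the residuals of each after a linear regression on z."""
    Z = np.column_stack([np.ones_like(z), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

# Synthetic monthly series: NDVI depends on both temperature and precipitation.
rng = np.random.default_rng(1)
temp = rng.normal(20, 5, 200)        # temperature (deg C)
precip = rng.normal(100, 30, 200)    # precipitation (mm)
ndvi = 0.02 * temp + 0.001 * precip + rng.normal(0, 0.05, 200)

print(round(partial_corr(ndvi, temp, precip), 3))
```

Applied per pixel to NDVI–temperature (controlling precipitation) and NDVI–precipitation (controlling temperature), this yields maps like the paper's Figures 5–7.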
Graphical abstract
Figure 1
<p>Geographic location and elevation of the study area.</p>
Figure 2
<p>Trends of annual maximum NDVI from 1998 to 2018.</p>
Figure 3
<p>Land use/land cover changes in the study area: (<b>a</b>) land use/land cover in the study area in 2000; and (<b>b</b>) land use/land cover in the study area in 2018.</p>
Figure 4
<p>Areal mean change of annual NDVI from 1998 to 2018.</p>
Figure 5
<p>Partial correlation coefficient of NDVI–precipitation and NDVI–temperature (Total series). (<b>a</b>) partial correlation coefficient of NDVI–temperature; and (<b>b</b>) partial correlation coefficient of NDVI–precipitation.</p>
Figure 6
<p>Partial correlation coefficient of NDVI–temperature in four seasons: (<b>a</b>) spring; (<b>b</b>) summer; (<b>c</b>) autumn; and (<b>d</b>) winter.</p>
Figure 7
<p>Partial correlation coefficient of NDVI–precipitation in four seasons: (<b>a</b>) spring; (<b>b</b>) summer; (<b>c</b>) autumn; and (<b>d</b>) winter.</p>
Figure 8
<p>NDVI residuals and residual trend after water impoundment.</p>
Figure 9
<p>Observed and predicted value of NDVI after impoundment.</p>
Figure 10
<p>The sensitivity index (SI) and sensitivity variation index (SVI) for the NDVI–temperature response in 0–10 km zone (PRE = 100 mm).</p>
Figure 11
<p>The sensitivity index (SI) and sensitivity variation index (SVI) for the NDVI–temperature response in 0–10 km zone (PRE = 200 mm).</p>
Figure 12
<p>The sensitivity index (SI) and sensitivity variation index (SVI) for the NDVI–temperature response in 0–10 km zone (PRE = 300 mm).</p>
Figure 13
<p>The sensitivity index (SI) and sensitivity variation index (SVI) for the NDVI–precipitation response in 0–10 km zones.</p>
Figure 14
<p>The difference index (DI) describing the change of NDVI–climate response in different zones.</p>
17 pages, 5573 KiB  
Article
Neural Network Based Quality Control of CYGNSS Wind Retrieval
by Rajeswari Balasubramaniam and Christopher Ruf
Remote Sens. 2020, 12(17), 2859; https://doi.org/10.3390/rs12172859 - 3 Sep 2020
Cited by 16 | Viewed by 3692
Abstract
Global Navigation Satellite System – Reflectometry (GNSS-R) is a relatively new field in remote sensing that uses reflected GPS signals from the Earth’s surface to study the state of the surface geophysical parameters under observation. The CYGNSS is a first of its kind [...] Read more.
Global Navigation Satellite System – Reflectometry (GNSS-R) is a relatively new field in remote sensing that uses reflected GPS signals from the Earth’s surface to study the state of the surface geophysical parameters under observation. The CYGNSS is a first of its kind GNSS-R constellation mission launched in December 2016. It aims at providing high quality global scale GNSS-R measurements that can reliably be used for ocean science applications such as the study of ocean wind speed dynamics, tropical cyclone genesis, coupled ocean wave modelling, and assimilation into Numerical Weather Prediction models. To achieve this goal, strong quality control filters are needed to detect and remove outlier measurements. Currently, quality control of CYGNSS data products are based on fixed thresholds on various engineering, instrument, and measurement conditions. In this work we develop a Neural Network based quality control filter for automated outlier detection of CYGNSS retrieved winds. The primary merit of the proposed ML filter is its ability to better account for interactions between the individual engineering, instrument and measurement conditions than can separate thresholded flags for each one. Use of Machine Learning capabilities to capture inherent patterns in the data can create an efficient and effective mechanism to detect and remove outlier measurements. The resulting filter has a probability of outlier detection (PD) >75% and False Alarm Rate (FAR) < 20% for a wind speed range of 5 to 18 m/s. At least 75% of the outliers with wind speed errors of at least 5 m/s are removed while ~100% of the outliers with wind speed errors of at least 10 m/s are removed. This filter significantly improves data quality. The standard deviation of wind speed retrieval error is reduced from 2.6 m/s without the filter to 1.7 m/s with it over a wind speed range of 0 to 25 m/s. 
The design space for this filter is also analyzed in this work to characterize trade-offs between PD and FAR. At present the filter is applicable only up to moderate wind speeds, since sufficient training data are available only in this range; as a way forward, more data accumulated over time can help expand the usability of this filter to higher wind speed ranges as well.
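As a schematic illustration of the PD/FAR bookkeeping described in this abstract, the sketch below evaluates a binary outlier filter against a truth label. The "filter" here is a crude synthetic stand-in (a threshold on the retrieved/reference wind pair), not the paper's trained neural network, and the 5 m/s outlier definition and all matchup values are illustrative assumptions.

```python
# Minimal sketch of probability of detection (PD) and false alarm rate (FAR)
# for a binary outlier filter. The filter rule and data are synthetic.

def pd_far(truth_outlier, flagged):
    """PD = flagged true outliers / all true outliers;
    FAR = flagged good samples / all good samples."""
    tp = sum(1 for t, f in zip(truth_outlier, flagged) if t and f)
    fp = sum(1 for t, f in zip(truth_outlier, flagged) if not t and f)
    n_out = sum(truth_outlier)
    n_good = len(truth_outlier) - n_out
    pd_ = tp / n_out if n_out else 0.0
    far = fp / n_good if n_good else 0.0
    return pd_, far

# Synthetic matchups: (retrieved wind, reference wind) pairs in m/s.
pairs = [(6.0, 5.5), (20.0, 7.0), (12.0, 11.0), (18.0, 6.5), (9.0, 9.2)]
truth = [abs(c - r) >= 5.0 for c, r in pairs]       # outlier = error >= 5 m/s
flags = [c > 15.0 and r < 10.0 for c, r in pairs]   # crude stand-in filter

pd_val, far_val = pd_far(truth, flags)
print(pd_val, far_val)  # flags both large-error samples, no false alarms
```

A trained classifier would replace the hand-written `flags` rule; the PD/FAR trade-off is then swept by varying the classifier's decision threshold.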
(This article belongs to the Special Issue Applications of GNSS Reflectometry for Earth Observation)
Figure 1. Log density plot of CYGNSS Level 2 retrieved winds matched to MERRA-2 reference winds for the wind speed range 0–25 m/s. The dashed line represents 1:1 agreement between the two winds. The solid line is the average CYGNSS retrieved wind at each MERRA-2 wind speed. A clustering of outliers can be seen near MERRA-2 wind speeds of 0–10 m/s and CYGNSS wind speeds >15 m/s (shown with a red box). One primary objective of the new filter is removal of this cluster.
Figure 2. Density plot of CYGNSS Level 2 retrieved winds matched to MERRA-2 reference winds used for training. The top row shows the good (left) and outlier (right) training populations for 0 < U_CYG <= 10 m/s. The bottom row shows the good (left) and outlier (right) training populations for U_CYG > 10 m/s.
Figure 3. Steps involved in the new quality control algorithm for CYGNSS data. The algorithm has three major stages: feature extraction, ML training, and validation/testing.
Figure 4. PD and FAR curves for three different network sizes (5, 10 and 15).
Figure 5. Family of PD and FAR curves for different definitions of good and outlier samples. The dark-light blue curves represent PD and the orange-red curves represent FAR.
Figure 6. CYGNSS retrieved wind dataset after quality control. The outliers in the red box have been mostly eliminated here relative to Figure 1.
Figure 7. PD and FAR metrics for the CYGNSS test dataset.
Figure 8. Ratio of outliers correctly identified by the filter to the actual number of outliers vs. wind speed difference.
Figure 9. Distribution of outliers at different MERRA-2 wind speed bins before and after the QC filter.
Figure 10. Distribution of CYGNSS retrieved winds before and after the QC filter.
Figure 11. Dominant diagnostic variables in identifying outliers. Variable definitions are provided in Table 1.
Figure 12. Mean difference and RMS difference statistics on CYGNSS retrieved winds before and after the QC filter.
Figure 13. Variance in CYGNSS retrieved winds before and after the QC filter.
12 pages, 19485 KiB  
Technical Note
Validation and Calibration of Significant Wave Height and Wind Speed Retrievals from HY2B Altimeter Based on Deep Learning
by Jiuke Wang, Lotfi Aouf, Yongjun Jia and Youguang Zhang
Remote Sens. 2020, 12(17), 2858; https://doi.org/10.3390/rs12172858 - 3 Sep 2020
Cited by 32 | Viewed by 4571
Abstract
HY2B is the latest altimetry mission providing global nadir significant wave height (SWH) and sea surface wind speed. The validation and calibration of HY2B are carried out against National Data Buoy Center (NDBC) buoy observations from April 2019 to April 2020. In general, the HY2B altimeter measurements agree well with the buoy observations, with a scatter index of 9.4% for SWH and 15.1% for wind speed. However, we observed a significant bias of 0.14 m for SWH and −0.42 m/s for wind speed. A deep learning technique is applied, for the first time, to the calibration of HY2B SWH and wind speed. A deep neural network (DNN) is built and trained to correct SWH and wind speed using parameters provided by the altimeter, such as sigma0 and the sigma0 standard deviation (STD), as input. The DNN results show a significant reduction of the bias, root mean square error (RMSE), and scatter index (SI) for both SWH and wind speed. Several DNN schemes based on different combinations of input parameters were examined in order to obtain the best model for the calibration. The analysis reveals that sigma0 STD is a key parameter for the calibration of HY2B SWH and wind speed.
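The validation statistics quoted in this abstract (bias, RMSE, and scatter index) can be sketched as below. All SWH values are synthetic, and the SI definition used here (RMSE divided by the mean reference value) is one common convention, given as an assumption rather than the paper's exact formula.

```python
# Sketch of altimeter-vs-buoy validation statistics on synthetic data.
import math

def validation_stats(retrieved, reference):
    n = len(retrieved)
    diffs = [r - b for r, b in zip(retrieved, reference)]
    bias = sum(diffs) / n
    rmse = math.sqrt(sum(d * d for d in diffs) / n)
    si = rmse / (sum(reference) / n)   # scatter index, often quoted in %
    return bias, rmse, si

swh_alt = [1.1, 2.0, 3.2, 1.6]    # altimeter SWH (m), synthetic
swh_buoy = [1.0, 1.9, 3.0, 1.5]   # buoy SWH (m), synthetic
bias, rmse, si = validation_stats(swh_alt, swh_buoy)
```

A calibration (such as the DNN correction) succeeds if it shrinks all three numbers on held-out matchups.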
(This article belongs to the Special Issue Calibration and Validation of Satellite Altimetry)
Figure 1. The locations of National Data Buoy Center (NDBC) buoys used in the paper.
Figure 2. The validation of HY2B significant wave height (SWH) against NDBC buoys. (a) Scatter plot and statistical parameters; (b) bias and RMSE distribution with the SWH value; (c) scatter index (SI) and normalized root mean square error (NRMSE) distribution with the SWH value.
Figure 3. The validation of HY2B wind speed against NDBC buoys. (a) Scatter plot and statistical parameters; (b) bias and RMSE distribution with the wind speed; (c) SI and NRMSE distribution with the wind speed.
Figure 4. The basic calculation principle of one neuron. X1, X2, and X3 are the inputs of the neuron; W1, W2, and W3 are the weights of each input X; the activation function of the neuron is σ(X); Y is the output of the neuron.
Figure 5. The structure of the deep neural network for SWH calibration scheme 3.
Figure 6. The percentage reduction of error by each SWH calibration DNN scheme.
Figure 7. Detailed comparison between DNN scheme 3 and the original HY2B SWH. (a) Scatter plots of SWH from buoys and HY2B; red dots and blue circles stand for original HY2B and DNN calibration, respectively. (b) Bias and RMSE variation with SWH; Bd and Bo indicate the bias of DNN calibration and original HY2B, respectively, while Rd and Ro represent the RMSE of DNN and original. (c) SI and NRMSE variation with SWH; SId and SIo stand for SI of DNN and original; NRd and NRo stand for NRMSE of DNN and original. (d) Percentage of improvement by the DNN calibration for bias, RMSE, and SI.
Figure 8. The structure of the deep neural network for wind speed calibration scheme 6.
Figure 9. The percentage reduction of error by each wind speed calibration DNN scheme.
Figure 10. Detailed comparison between DNN scheme 6 and the original HY2B wind speed. (a) Scatter plots of original HY2B wind speed (red dots) and DNN-calibrated wind speed (blue circles); (b) bias and RMSE distribution with the wind speed; Bd and Bo represent the bias of DNN calibration and original HY2B, respectively, while Rd and Ro stand for RMSE of DNN and original. (c) SI and NRMSE variation with wind speed; SId and SIo represent the SI of DNN and original while NRd and NRo represent the NRMSE of DNN and original. (d) Percentage of improvement by the DNN calibration for bias, RMSE, and SI.
15 pages, 2668 KiB  
Article
Estimation of Winter Wheat Production Potential Based on Remotely-Sensed Imagery and Process-Based Model Simulations
by Tingting Lang, Yanzhao Yang, Kun Jia, Chao Zhang, Zhen You and Yubin Liang
Remote Sens. 2020, 12(17), 2857; https://doi.org/10.3390/rs12172857 - 3 Sep 2020
Cited by 4 | Viewed by 3239
Abstract
Crop production potential is an index used to evaluate crop productivity capacity in a region. The spatial distribution of production potential gives the maximum attainable crop yield and visually clarifies the prospects of agricultural development. The DSSAT (Decision Support System for Agrotechnology Transfer) model has been used in crop growth analysis, but high-resolution spatial simulation and analysis have not been widely performed for exact crop planting locations. In this study, the light-temperature production potential of winter wheat was simulated with the DSSAT model over the winter wheat planting area in the Jing-Jin-Ji (JJJ) region, extracted from remote sensing (RS) image data. To obtain the precise study area, a decision tree (DT) classification was used to extract the winter wheat planting area. Geographic Information System (GIS) technology was used to process the spatial data and map the spatial distribution of the production potential. The production potential of winter wheat was estimated in batches with the DSSAT model. The results show that the light-temperature production potential in JJJ is between 4238 and 10,774 kg/ha. Owing to the influences of light and temperature, the production potential in the central part of the planting area is higher than that in the south and north of JJJ. These results can be useful for crop model users and decision makers in JJJ.
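A decision-tree classification of NDVI time series, as used here to extract the winter wheat planting area, can be sketched as a cascade of threshold rules. The thresholds, months, and class rules below are illustrative assumptions (winter wheat in this region shows a spring green-up peak followed by a post-harvest NDVI drop), not the paper's calibrated decision tree.

```python
# Minimal decision-tree-style rule on a monthly NDVI series to separate
# winter wheat from other cover types. Thresholds are hypothetical.

def classify(ndvi_by_month):
    """ndvi_by_month: dict month -> NDVI. Returns a coarse class label."""
    spring_peak = max(ndvi_by_month[m] for m in (3, 4, 5))
    summer = ndvi_by_month[7]
    if spring_peak > 0.5 and summer < 0.4:
        return "winter_wheat"   # strong spring peak, post-harvest drop
    if min(ndvi_by_month.values()) > 0.4:
        return "forest"         # persistently high NDVI all year
    return "other"              # e.g. buildings, bare soil

wheat = {3: 0.45, 4: 0.65, 5: 0.60, 7: 0.25}   # synthetic NDVI samples
forest = {3: 0.50, 4: 0.60, 5: 0.70, 7: 0.75}
print(classify(wheat), classify(forest))
```

In practice the thresholds are derived from training samples such as the field sampling points shown in the article's figures.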
Figure 1. Study area and the cropland distribution.
Figure 2. Technical flowchart.
Figure 3. The Normalized Difference Vegetation Index (NDVI) time series curves of sample points: (a) winter wheat; (b) forest; (c) buildings.
Figure 4. The spatial distribution of the extracted planting locations and the field sampling points of winter wheat in the Jing-Jin-Ji (JJJ) region.
Figure 5. The distribution of the production potential of winter wheat in JJJ.
Figure 6. Temperature and total solar radiation distribution during the winter wheat growth period in JJJ.
17 pages, 5078 KiB  
Article
Instrument Development: Chinese Radiometric Benchmark of Reflected Solar Band Based on Space Cryogenic Absolute Radiometer
by Xin Ye, Xiaolong Yi, Chao Lin, Wei Fang, Kai Wang, Zhiwei Xia, Zhenhua Ji, Yuquan Zheng, De Sun and Jia Quan
Remote Sens. 2020, 12(17), 2856; https://doi.org/10.3390/rs12172856 - 3 Sep 2020
Cited by 29 | Viewed by 3159
Abstract
Remote sensing data with low uncertainty and long-term stability are urgently needed for researching climate and meteorological variability and trends. Meeting these requirements with current in-orbit calibration accuracy is difficult due to the lack of a radiometric satellite benchmark. A radiometric benchmark for the reflected solar band has been under development since 2015 to overcome the on-board traceability problem of hyperspectral remote sensing satellites. This paper introduces the development progress of the Chinese radiometric benchmark of the reflected solar band based on the Space Cryogenic Absolute Radiometer (SCAR). The goal of the SCAR is to calibrate the Earth–Moon Imaging Spectrometer (EMIS) on-satellite using the benchmark transfer chain (BTC) and to transfer the traceable radiometric scale to other remote sensors via cross-calibration. The SCAR, an electrical substitution absolute radiometer working at 20 K, is used to realize highly accurate radiometry with an uncertainty lower than 0.03%. The EMIS, which measures the spectral radiance in the reflected solar band, is designed to optimize the signal-to-noise ratio and polarization. The radiometric scale of the SCAR is converted and transferred to the EMIS by the BTC to improve the measurement accuracy and long-term stability. The payload of the radiometric benchmark on the reflected solar band has been under development since 2018. The investigation results provide the theoretical and experimental basis for the development of the reflected solar band benchmark payload, which is important for improving the measurement accuracy and long-term stability of space remote sensing and for providing key data for climate change and Earth radiation studies.
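The electrical substitution principle behind an absolute radiometer such as the SCAR can be illustrated with a toy calculation: the unknown optical power absorbed by the cavity is equated to the change in electrical heater power needed to hold the cavity at a fixed temperature with the shutter closed versus open. The voltages and currents below are purely illustrative, not instrument values.

```python
# Toy illustration of electrical substitution radiometry.
# Cavity temperature is servo-controlled to a constant set point; the
# heater supplies less power when optical power is also being absorbed.

def electrical_power(volts, amps):
    return volts * amps   # Joule heating, P = V * I

# Shutter open: heater power needed to hold the set-point temperature.
p_open = electrical_power(0.10, 0.002)    # 0.2 mW (illustrative)
# Shutter closed: heater alone maintains the same temperature.
p_closed = electrical_power(0.50, 0.002)  # 1.0 mW (illustrative)

optical_power = p_closed - p_open         # substituted radiant power (W)
```

Operating the cavity at cryogenic temperature (20 K here) suppresses thermal noise and radiative losses, which is what pushes the substitution uncertainty to the 0.03% level quoted in the abstract.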
Figure 1. The working model of the radiometric benchmark satellite: (a) self-calibration, (b) total solar irradiance calibration, (c) uniform field calibration, and (d) lunar calibration.
Figure 2. The principle of the radiometric benchmark on the reflected solar band.
Figure 3. Optical design of the Earth–Moon Imaging Spectrometer (EMIS).
Figure 4. Principle of the benchmark transfer chain (BTC).
Figure 5. The Space Cryogenic Absolute Radiometer (SCAR) prototype. SPTC: Stirling-type Pulse Tube Cryocooler.
Figure 6. Thermoelectric repeatability of the blackbody cavity.
Figure 7. Spectral radiance attenuation of a tungsten halogen lamp.
Figure 8. Spectral radiance inversion of the tungsten halogen lamp.
Figure 9. Signal-to-noise ratio (SNR) simulation results: (a) pupil radiance of the Visible Near Infrared Radiance spectrometer (VNIR), (b) pupil radiance of the Short Wave Infrared Radiance spectrometer (SWIR), (c) signal electron number of VNIR, (d) signal electron number of SWIR, (e) VNIR spectrometer SNR, and (f) SWIR spectrometer SNR.
18 pages, 8817 KiB  
Article
Radiometric Calibration for Incidence Angle, Range and Sub-Footprint Effects on Hyperspectral LiDAR Backscatter Intensity
by Changsai Zhang, Shuai Gao, Wang Li, Kaiyi Bi, Ni Huang, Zheng Niu and Gang Sun
Remote Sens. 2020, 12(17), 2855; https://doi.org/10.3390/rs12172855 - 2 Sep 2020
Cited by 16 | Viewed by 3928
Abstract
Terrestrial hyperspectral LiDAR (HSL) sensors provide not only spatial information on the measured targets but also the backscattered spectral intensity of the laser pulse. The raw intensity collected by HSL is influenced by several factors, among which the range, incidence angle, and sub-footprint effects play a significant role. Further study of these influences is needed to improve the accuracy of the backscatter intensity data, which is important for estimating vegetation structural and biochemical information. In this paper, we investigated these effects on the laser backscatter intensity and developed a practical correction method for HSL data. We established a laser ratio calibration method and a reference-target-based method for HSL and investigated the calibration procedures for measurements with mixed incidence angle, range, and sub-footprint effects. Results showed that the laser ratio at the red-edge and near-infrared laser wavelengths offers higher accuracy and simplicity in eliminating range, incidence angle, and sub-footprint effects and can significantly reduce the backscatter intensity discrepancies caused by these effects.
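The two classical single-wavelength corrections discussed here (a Lambertian cosine factor for incidence angle, an inverse-square factor for range) and the two-wavelength laser ratio can be sketched as below. The reference range, intensities, and geometry are synthetic; the point of the ratio is that geometric factors common to both channels cancel, which is why it also suppresses sub-footprint effects.

```python
# Sketch: single-wavelength geometric corrections vs. a two-wavelength ratio.
import math

def corrected(intensity, angle_deg, rng, ref_range=8.0):
    """Normalise a raw return to 0 deg incidence and a reference range,
    assuming a Lambertian target and inverse-square range dependence."""
    return intensity * (rng / ref_range) ** 2 / math.cos(math.radians(angle_deg))

def laser_ratio(i_nir, i_red_edge):
    """NIR / red-edge backscatter ratio; shared geometry factors cancel."""
    return i_nir / i_red_edge

# A target seen at 60 deg and 16 m: cosine halves the return, range quarters it.
raw = 100.0 * math.cos(math.radians(60.0)) * (8.0 / 16.0) ** 2
print(corrected(raw, 60.0, 16.0))   # recovers the 0-deg, 8-m intensity

# The ratio is unchanged by geometry because both channels scale together.
g = math.cos(math.radians(60.0)) * (8.0 / 16.0) ** 2
assert math.isclose(laser_ratio(80.0 * g, 40.0 * g), laser_ratio(80.0, 40.0))
```

Real leaf targets deviate from the Lambertian assumption, which is one motivation for the ratio-based correction over the explicit cosine model.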
(This article belongs to the Special Issue Advances in LiDAR Remote Sensing for Forestry and Ecology)
Figure 1. Illustration of the sub-footprint effect. The laser beam is emitted by a supercontinuum laser with a 3 mrad divergence angle.
Figure 2. Measurement setup of experiment 1: a target at a fixed distance of 6 m is scanned at varying incidence angles. A representative laser beam is depicted in red.
Figure 3. Measurement setup of experiment 2, with distance from 4 to 20 m and a step interval of 1 m.
Figure 4. Backscatter intensities of twenty wavelengths of the standard reflectance panels versus incidence angle. (a) Raw intensities of the 99% standard reflectance panel; (b) raw intensities of the 50% standard reflectance panel.
Figure 5. Backscatter intensities of twenty wavelengths of the six leaf targets versus incidence angle: (a) Ficus elastica, (b) Epipremnum aureum, (c) Aglaonema modestum, (d) Hibiscus rosa-sinensis Linn, (e) Spathiphyllum kochii and (f) Kalanchoe blossfeldiana Poelln.
Figure 6. Backscattered intensities of the twenty wavelengths, normalized to 1 at 0°. (a) Normalized intensities of the 99% standard reflectance panel; (b) normalized intensities of a leaf sample. The Lambertian cosine law is depicted in black.
Figure 7. Relative correction results of all wavelengths for the six leaf targets versus incidence angle; the 99% standard reflectance panel is used as a reference. (a) Ficus elastica, (b) Epipremnum aureum, (c) Aglaonema modestum, (d) Hibiscus rosa-sinensis Linn, (e) Spathiphyllum kochii and (f) Kalanchoe blossfeldiana Poelln.
Figure 8. Backscatter intensities of twenty wavelengths versus distance. (a) Raw intensities of the 99% standard reflectance panel; (b) raw intensities of the 50% standard reflectance panel; (c) raw intensities of a leaf sample.
Figure 9. Backscattered intensities of the four laser wavelengths for the standard panels and leaf sample, normalized to 1 at 8 m.
Figure 10. Relative correction results of all wavelengths for (a) the 50% standard reflectance panel and (b) the leaf target versus distance; the 99% standard reflectance panel is used as a reference.
Figure 11. Laser ratio correction results for the leaf targets versus incidence angle. (a) Ficus elastica, (b) Epipremnum aureum, (c) Aglaonema modestum, (d) Hibiscus rosa-sinensis Linn, (e) Spathiphyllum kochii and (f) Kalanchoe blossfeldiana Poelln.
Figure 12. Laser ratio correction results for the leaf sample versus distance.
Figure 13. The RMSE deviation results of the laser ratio indices of the leaf samples.
Figure 14. Hyperspectral LiDAR backscatter intensity correction visualization. (a) Kniphofia; (b) before correction (near-infrared wavelength intensity, 784 nm); (c) after correction (NDLI_nr); (d) after correction (SLR_ne).
Figure 15. Spectral separability of leaf returns and mixed-edge returns associated with the red-edge (719 nm) and near-infrared (784 nm) laser backscatter intensities; sub-footprint effect adjusted with the normalized difference laser index (NDLI_nr) and sample laser ratio (SLR_ne).
24 pages, 10327 KiB  
Article
Evaluating the Spectral Indices Efficiency to Quantify Daytime Surface Anthropogenic Heat Island Intensity: An Intercontinental Methodology
by Mohammad Karimi Firozjaei, Solmaz Fathololoumi, Naeim Mijani, Majid Kiavarz, Salman Qureshi, Mehdi Homaee and Seyed Kazem Alavipanah
Remote Sens. 2020, 12(17), 2854; https://doi.org/10.3390/rs12172854 - 2 Sep 2020
Cited by 20 | Viewed by 3312
Abstract
The surface anthropogenic heat island (SAHI) phenomenon is one of the most important environmental concerns in urban areas, as SAHIs play a significant role in the quality of urban life. Hence, the quantification of SAHI intensity (SAHII) is of great importance. Impervious surface cover (ISC) can well reflect the degree and extent of anthropogenic activities in an area. Various actual ISC (AISC) datasets are available for different regions of the world, but their temporal and spatial coverage is limited. This study aimed to evaluate the efficiency of spectral indices for daytime SAHII (DSAHII) quantification. Consequently, 14 cities were selected: Budapest, Bucharest, Ciechanow, Hamburg, Lyon, Madrid, Porto, and Rome in Europe, and Dallas, Seattle, Minneapolis, Los Angeles, Chicago, and Phoenix in the USA. A set of 91 Landsat 8 images, the Landsat provisional surface temperature product, the High Resolution Imperviousness Layer (HRIL), and the National Land Cover Database (NLCD) imperviousness data were used as the AISC datasets for the selected cities. The spectral index-based ISC (SIISC) and land surface temperature (LST) were modelled from the Landsat 8 images. A linear least squares model (LLSM) obtained from the LST-AISC feature space was then applied to quantify the actual SAHII of the selected cities, and the SAHII was also modelled with the LLSM derived from the LST-SIISC feature space. Finally, the coefficient of determination (R2) and the root mean square error (RMSE) between the actual and modelled SAHII were calculated to evaluate and compare the performance of the different spectral indices in SAHII quantification.
The performance of the spectral indices used to build the LST-SIISC feature space differed: the index-based built-up index (IBI) (R2 = 0.98, RMSE = 0.34 °C) and albedo (R2 = 0.76, RMSE = 1.39 °C) showed the best and worst performance in SAHII quantification, respectively. Our results indicate that the LST-SIISC feature space is very useful and effective for SAHII quantification. The advantages of the spectral indices used in SAHII quantification include (1) synchronization with the recording of thermal data, (2) simplicity, (3) low cost, (4) accessibility under different spatial and temporal conditions, and (5) scalability.
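The LST-ISC feature-space step described in this abstract can be sketched as an ordinary least squares fit of LST against rescaled impervious cover, with the heat island intensity taken as the fitted LST at full cover minus the fitted LST at zero cover (i.e. the regression slope). The pixel values below are synthetic, and reading SAHII directly off the slope is a simplification of the paper's LLSM procedure.

```python
# Sketch: SAHII from a linear fit of LST (deg C) vs. rescaled ISC in [0, 1].

def linear_fit(x, y):
    """Ordinary least squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

isc = [0.0, 0.25, 0.5, 0.75, 1.0]       # rescaled impervious surface cover
lst = [24.0, 25.5, 27.0, 28.5, 30.0]    # land surface temperature, synthetic

slope, intercept = linear_fit(isc, lst)
# Fitted urban LST (ISC = 1) minus fitted rural LST (ISC = 0) = slope.
sahii = (slope * 1.0 + intercept) - (slope * 0.0 + intercept)
```

Repeating the fit with a spectral-index-based ISC in place of the actual ISC, and comparing the two SAHII estimates via R2 and RMSE, is the evaluation the study performs.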
Figure 1. Geographical location and colour-composite images (blue, near-infrared (NIR), and short-wave infrared 2 (SWIR2) bands) of the selected cities in the USA and Europe.
Figure 2. Flowchart of the study.
Figure 3. Conceptual diagram of modelling the daytime surface anthropogenic heat island intensity (DSAHII) based on the land surface temperature (LST)-actual impervious surface cover (AISC) feature space. LSTu is the urban LST, representing the LST in the urban area where the rescaled AISC is 1 (AISC is 100%), and LSTr is the rural LST, representing the LST in the rural area where the rescaled AISC is 0 (AISC is 0%).
Figure 4. The mean of the standardized ISC (SISC), standardized normalized difference built-up index (SNDBI), standardized greenness (SGreenness), standardized wetness (SWetness), and standardized LST (SLST) maps for the selected cities in the USA on different dates.
Figure 5. The mean of the SISC, SNDBI, SGreenness, SWetness, and SLST maps for the selected cities in Europe on different dates.
Figure 6. The mean value of DSAHII obtained from the LST-ISC feature space for the selected cities in Europe and the USA on different dates.
Figure 7. The R2 and root mean square error (RMSE) values between the actual and modelled DSAHII based on the spectral index-based ISC (SIISC). The solid red line represents the predicted relationship between the actual and modelled DSAHII.
24 pages, 23647 KiB  
Article
Diurnal to Seasonal Variations in Ocean Chlorophyll and Ocean Currents in the North of Taiwan Observed by Geostationary Ocean Color Imager and Coastal Radar
by Po-Chun Hsu, Ching-Yuan Lu, Tai-Wen Hsu and Chung-Ru Ho
Remote Sens. 2020, 12(17), 2853; https://doi.org/10.3390/rs12172853 - 2 Sep 2020
Cited by 10 | Viewed by 4662
Abstract
The waters north of Taiwan lie at the southern end of the East China Sea (ECS), adjacent to the Taiwan Strait (TS) and the Kuroshio region. To understand the physical dynamics of the ocean currents and the temporal and spatial distribution of the ocean chlorophyll concentration north of Taiwan, hourly coastal ocean dynamics applications radar (CODAR) flow field data and geostationary ocean color imager (GOCI) data are analyzed here. According to data from December 2014 to May 2020, in spring and summer the water in the TS flows along the northern coast of Taiwan into the Kuroshio region through the ECS with a velocity of 0.13 m/s. In winter, the Kuroshio invades the ECS shelf, and the water flows into the TS through the ECS with a velocity of 0.08 m/s. The seasonal variation of the ocean chlorophyll concentration along the northwestern coast of Taiwan is obvious: the average chlorophyll concentration from November to January exceeds 2.0 mg/m3, while the lowest concentration, in spring, is 1.4 mg/m3. The tidal currents north of Taiwan flow eastward and westward during the ebb and flood periods, respectively. Affected by the background currents, the flow velocity exhibits significant seasonal changes: 0.43 m/s in summer and 0.27 m/s in winter during the ebb period, and 0.26 m/s in summer and 0.45 m/s in winter during the flood period. The chlorophyll concentration near the shore is also significantly affected by the tidal currents. Based on CODAR data, virtual drifter experiments, and GOCI data, this research provides novel and important knowledge of ocean current movement processes north of Taiwan and documents diurnal to seasonal variations in the ocean chlorophyll concentration, facilitating future research on the interaction between the TS, ECS, and Kuroshio.
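A virtual drifter experiment of the kind mentioned here can be sketched as forward-Euler advection of a particle through a gridded surface-current field. The uniform eastward current (set to the 0.43 m/s summer ebb speed quoted above), the step sizes, and the flat-Earth degree conversion are illustrative simplifications, not the CODAR-driven simulation of the paper.

```python
# Sketch: advect a virtual drifter through a surface-current field.

def advect(lon, lat, u_of, v_of, dt_s, n_steps):
    """u_of/v_of return current components (m/s) at a position."""
    m_per_deg = 111_000.0              # metres per degree, approximate
    for _ in range(n_steps):
        u, v = u_of(lon, lat), v_of(lon, lat)
        lon += u * dt_s / m_per_deg    # ignores cos(lat) factor for brevity
        lat += v * dt_s / m_per_deg
    return lon, lat

# A steady 0.43 m/s eastward stream, stepped hourly for six hours.
u = lambda lon, lat: 0.43
v = lambda lon, lat: 0.0
lon1, lat1 = advect(121.8, 25.3, u, v, dt_s=3600, n_steps=6)
```

With a real CODAR field, `u_of`/`v_of` would interpolate the hourly gridded currents in space and time, and the hourly steps would track the tidal reversal between ebb and flood.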
Figure 1. Study area (dotted frame) and adjacent region. The red dot is the tidal station, the green cross is a buoy, the blue square is the coastal ocean dynamics applications radar (CODAR) station, and the black arrow is the Kuroshio main path. The background colour of the enlarged study area is the submarine topography obtained from the ETOPO1 1 arc-minute global relief model (doi:10.7289/V5C8276M).
Figure 2. Historical in-situ survey observations of the ocean 10-m flow field, including multi-year statistics and the four seasons. Black arrows show the ocean current, the blue background is the seabed topography, and the black dotted line is the 100-m isobath.
Figure 3. Multi-year statistics and seasonal results of the vertical profile (10–70 m depth) of ocean currents in northern Taiwan: (a) fifteen specific analysis locations, (b) areas A1 to A5, (c) areas B1 to B5, and (d) areas C1 to C5.
Figure 4. (a) Three trajectories of drifters in the Taiwan Strait in summer. (b) ID: 61501380, (c) ID: 61502320-1, and (d) ID: 61502320-2.
Figure 5. (a) Three trajectories of drifters in the Taiwan Strait in winter. (b) ID: 62325980, (c) ID: 62415670, and (d) ID: 62416680.
Figure 6. (a) Three trajectories of drifters in the Taiwan Strait in spring. (b) ID: 63942860, (c) ID: 63942870, and (d) ID: 63942890.
Figure 7. (a) Two trajectories of drifters in the Kuroshio region in winter and spring. (b) ID: 63894830 and (c) ID: 63943580.
Figure 8. Monthly chlorophyll concentration from December 2014 to May 2020 with the CODAR flow field.
Figure 9. Monthly CODAR flow field during the ebb period from December 2014 to May 2020.
Figure 10. Monthly CODAR flow field during the flood period from December 2014 to May 2020.
Figure 11. Long-term chlorophyll concentration map from April 2011 to May 2020 and observation areas: (a) A (121–121.5°E; 25–25.5°N), B (121.5–122°E; 25–25.5°N), and C (coast; 122°E, 24.5–25°N); (b) section lines L1 to L8.
Figure 12. Monthly to seasonal variations in the chlorophyll concentration. (a) L1 to L5 sections, (b) L6 to L8 sections, (c,d) monthly and yearly means of the three areas in Figure 11a.
Figure 13. Distribution of the monthly average chlorophyll concentration from April 2011 to May 2020 along the northern coast of Taiwan. The black dotted lines and the white line segments represent the 50-m and 100-m isobaths, respectively.
Figure 14. Short-period variations in (a) chlorophyll concentration and (b) SST with ocean currents from 1:00 to 7:00 (UTC) on 19 February 2016. The areas A, B, and C in the time series diagram are marked as black boxes in the map.
Figure 15. Short-period variations in (a) chlorophyll concentration and (b) sea surface temperature (SST) with ocean currents from 0:00 to 7:00 (UTC) on 20 July 2017. The areas A, B, and C in the (c) time series diagram are marked as black boxes in the map.
Figure 16. Simulation of virtual drifter trajectories using CODAR flow field data; simulation times of the drifters: (a,b) 11:00 (UTC) 18 July 2015, (c) from 11:00 (UTC) 8 July to 05:00 (UTC) 9 July 2015, and (d–f) 0:00 (UTC) 19 February 2016.
Figure 17. A schematic diagram of the main characteristics of the flow field in the north of Taiwan.
34 pages, 11625 KiB  
Article
Modelling Water Colour Characteristics in an Optically Complex Nearshore Environment in the Baltic Sea; Quantitative Interpretation of the Forel-Ule Scale and Algorithms for the Remote Estimation of Seawater Composition
by Sławomir B. Woźniak and Justyna Meler
Remote Sens. 2020, 12(17), 2852; https://doi.org/10.3390/rs12172852 - 2 Sep 2020
Cited by 7 | Viewed by 3785
Abstract
The paper presents the modelling results of selected characteristics of water-leaving light in an optically complex nearshore marine environment. The modelled quantities include the spectra of the remote-sensing reflectance Rrs(λ) and the hue angle α, which quantitatively describes the colour of water visible to the unaided human eye. Based on the latter value, it is also possible to match water-leaving light spectra to classes on the traditional Forel-Ule water colour scale. We applied a simple model that assumes that seawater is made up of chemically pure water and three types of additional optically significant components: particulate organic matter (POM) (which includes living phytoplankton), particulate inorganic matter (PIM), and chromophoric dissolved organic matter (CDOM). We also utilised the specific inherent optical properties (SIOPs) of these components, determined from measurements made at a nearshore location on the Gulf of Gdańsk. To a first approximation, the simple model assumes that the Rrs spectrum can be described by a simple function of the ratio of the light backscattering coefficient to the sum of the light absorption and backscattering coefficients (u = bb/(a + bb)). The model calculations illustrate the complexity of possible relationships between the seawater composition and the optical characteristics of an environment in which the concentrations of individual optically significant components may be mutually uncorrelated. The calculations permit a quantitative interpretation of the Forel-Ule scale. The following parameters were determined for several classes on this scale: typical spectral shapes of the u ratio, possible ranges of the total light absorption coefficient in the blue band (a(440)), as well as upper limits for the concentrations of total suspended particles and their organic and inorganic fractions (SPM, POM and PIM).
The paper gives examples of practical algorithms that, based on a given Rrs spectrum or some of its features, and using lookup tables containing the modelling results, make it possible to estimate the approximate composition of seawater. Full article
(This article belongs to the Special Issue Baltic Sea Remote Sensing)
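The two central quantities of the abstract — the u ratio and the hue angle α derived by weighting an Rrs spectrum with the CIE 1931 colour-matching functions — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names are hypothetical, a uniformly gridded spectrum is assumed, and the test below uses toy Gaussian stand-ins for the colour-matching functions.

```python
import numpy as np

def u_ratio(a, bb):
    # Central quantity of the simple model: u = bb / (a + bb)
    return bb / (a + bb)

def hue_angle(wavelengths, rrs, cmf_x, cmf_y, cmf_z):
    """Hue angle (degrees) of water-leaving light: weight Rrs(lambda) by the
    CIE 1931 colour-matching functions, convert to chromaticity (x, y), and
    measure the angle about the white point (1/3, 1/3)."""
    w = np.gradient(wavelengths)          # trapezoid-like quadrature weights
    X = np.sum(rrs * cmf_x * w)
    Y = np.sum(rrs * cmf_y * w)
    Z = np.sum(rrs * cmf_z * w)
    x, y = X / (X + Y + Z), Y / (X + Y + Z)
    return float(np.degrees(np.arctan2(y - 1/3, x - 1/3)) % 360.0)
```

The resulting angle can then be binned into Forel-Ule classes using published angle ranges, as in the paper's Figure 3b.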
Show Figures

Figure 1

Figure 1
<p>Diagrams illustrating the main modelling and computational approaches used in this paper: (<b>a</b>) Modelling remote-sensing reflectance spectra <span class="html-italic">R<sub>rs</sub></span>(λ) with the <span class="html-italic">Hydrolight</span> model; (<b>b</b>) Modelling spectra of the <span class="html-italic">u</span> ratio and the remote-sensing reflectance (<span class="html-italic">u</span>(λ), <span class="html-italic">R<sub>rs</sub></span>(λ)) and determining the membership in a Forel-Ule class (FU index) using the simple water colour model; (<b>c</b>) Estimating the seawater composition by comparing the input <span class="html-italic">R<sub>rs</sub></span> spectrum with simplified modelling results set out in an extensive lookup table.</p>
Full article ">Figure 2
<p>Block diagram of the proposed simple water colour model.</p>
Full article ">Figure 3
<p>(<b>a</b>) Sensitivity of the human eye as described by spectral functions known as “CIE 1931 colour matching functions” (defined by Commission Internationale de l’Eclairage [<a href="#B43-remotesensing-12-02852" class="html-bibr">43</a>]); (<b>b</b>) diagram illustrating the correspondence between ranges of the hue angle α and the Forel-Ule scale classes (index FU) (acc. to [<a href="#B3-remotesensing-12-02852" class="html-bibr">3</a>]); (<b>c</b>) examples of colours that may represent the Forel-Ule scale classes (acc. to [<a href="#B3-remotesensing-12-02852" class="html-bibr">3</a>]).</p>
Full article ">Figure 4
<p>Relationships between quantities characterising the biogeochemical composition of seawater samples from the nearshore environment in the Baltic Sea (the Sopot pier): (<b>a</b>) Ratio of POM/SPM (“composition ratio”) vs. SPM concentration; the horizontal dashed lines represent the proposed division of all cases into five classes (I–V); (<b>b</b>) Variability of <span class="html-italic">a<sub>g</sub></span>(440) (absorption coefficient of CDOM at 440 nm) vs. SPM concentration; (<b>c</b>) Ratio of Chl<span class="html-italic">a</span>/POM to POM/SPM.</p>
Full article ">Figure 4 Cont.
<p>Relationships between quantities characterising the biogeochemical composition of seawater samples from the nearshore environment in the Baltic Sea (the Sopot pier): (<b>a</b>) Ratio of POM/SPM (“composition ratio”) vs. SPM concentration; the horizontal dashed lines represent the proposed division of all cases into five classes (I–V); (<b>b</b>) Variability of <span class="html-italic">a<sub>g</sub></span>(440) (absorption coefficient of CDOM at 440 nm) vs. SPM concentration; (<b>c</b>) Ratio of Chl<span class="html-italic">a</span>/POM to POM/SPM.</p>
Full article ">Figure 5
<p>Specific inherent optical properties of particulate matter in samples from the nearshore environment at Sopot: (<b>a</b>) Spectra of the mass-specific absorption coefficient <span class="html-italic">a<sub>p</sub></span>* for all samples; (<b>c</b>) example of the relationship between <span class="html-italic">a<sub>p</sub></span>* and the composition ratio (POM/SPM) for a light wavelength of 440 nm (the best-fit linear function for the data points is shown); (<b>e</b>) Spectra of coefficient <span class="html-italic">a<sub>p</sub></span>* for classes II–V (see <a href="#remotesensing-12-02852-f004" class="html-fig">Figure 4</a>) and estimated spectra for pure POM and pure PIM fractions of SPM; (<b>b</b>,<b>d</b>,<b>f</b>) The same as (<b>a</b>,<b>c</b>,<b>e</b>) but for the mass-specific backscattering coefficient <span class="html-italic">b<sub>bp</sub></span>* (panel <b>d</b> exemplifies the relevant relationship for a light wavelength of 550 nm).</p>
Full article ">Figure 6
<p>Individual spectra of the normalised CDOM absorption coefficient (values normalised to 440 nm, <span class="html-italic">a<sub>g</sub></span>(λ)/<span class="html-italic">a<sub>g</sub></span>(440)) for samples from the nearshore environment at Sopot, and the function by which these spectra can be approximated.</p>
Full article ">Figure 7
<p>(<b>a</b>) Estimated spectra of the remote-sensing reflectance <span class="html-italic">R<sub>rs</sub></span> for all the water samples from the nearshore environment at Sopot. These spectra were modelled on the basis of measured water IOPs using the full version of the <span class="html-italic">Hydrolight</span> radiative transfer model; (<b>b</b>) Comparison of hue angles calculated from photographs of a submerged Secchi disc (α<span class="html-italic"><sub>SD</sub></span>), and calculated from modelled spectra of <span class="html-italic">R<sub>rs</sub></span> (α); (<b>c</b>) Approximate relationship between the remote-sensing reflectance <span class="html-italic">R<sub>rs</sub></span> and the <span class="html-italic">u</span> ratio calculated from the available data.</p>
Full article ">Figure 7 Cont.
<p>(<b>a</b>) Estimated spectra of the remote-sensing reflectance <span class="html-italic">R<sub>rs</sub></span> for all the water samples from the nearshore environment at Sopot. These spectra were modelled on the basis of measured water IOPs using the full version of the <span class="html-italic">Hydrolight</span> radiative transfer model; (<b>b</b>) Comparison of hue angles calculated from photographs of a submerged Secchi disc (α<span class="html-italic"><sub>SD</sub></span>), and calculated from modelled spectra of <span class="html-italic">R<sub>rs</sub></span> (α); (<b>c</b>) Approximate relationship between the remote-sensing reflectance <span class="html-italic">R<sub>rs</sub></span> and the <span class="html-italic">u</span> ratio calculated from the available data.</p>
Full article ">Figure 8
<p>Examples of modelling results obtained with the simple water colour model: <span class="html-italic">u</span> ratio spectra for cases of pure PIM (<b>a</b>) or pure POM (<b>b</b>), calculated for various concentrations of PIM or POM (between 0.1 and 100 g m<sup>−3</sup>) and for three assumed constant concentrations of CDOM (represented by the absorption coefficient <span class="html-italic">a<sub>g</sub></span>(440): 0 m<sup>−1</sup> (no CDOM in the water), 0.5 m<sup>−1</sup> (moderate CDOM concentration) and 1.6 m<sup>−1</sup> (very high CDOM concentration)) (see the descriptions in the panel legends).</p>
Full article ">Figure 9
<p>Characterization of the modelled <span class="html-italic">u</span> ratio spectra: (<b>a</b>) Average shapes of the normalised <span class="html-italic">u</span> ratio (<span class="html-italic">u<sub>norm</sub></span>(λ)) representing different classes of the Forel-Ule scale (normalized to the integral expressing the area under the curve across the entire spectral range); (<b>b</b>) Examples of average <span class="html-italic">u</span> ratio spectra, as well as ranges of the minimum and maximum values recorded for three classes on the Forel-Ule scale (FU = 7, 9 and 17).</p>
Full article ">Figure 10
<p>Examples of relationships derived from the results of the simple water colour model compared with empirical data from the Sopot pier: (<b>a</b>,<b>b</b>) observed “strong” statistical relationships involving seawater IOPs: relationships between <span class="html-italic">a</span>(440) and α, and between <span class="html-italic">b<sub>b</sub></span>(620) and <span class="html-italic">u</span>(620); (<b>c</b>) Diagram illustrating possible combinations of <span class="html-italic">u</span>(620) and α values depicted by the model (in that panel, the various curves exemplify relationships between <span class="html-italic">u</span>(620) and α, recorded for fixed (constant) values of <span class="html-italic">a<sub>g</sub></span>(440), and for varying pure PIM or pure POM concentrations in water); (<b>d</b>) Diagram as in (<b>c</b>) but also showing points representing the Sopot pier data; ranges of the hue angle α representing the two extreme Forel-Ule scale classes in the dataset are shown by vertical dashed lines (for FU classes 7 and 17).</p>
Full article ">Figure 11
<p>Diagrams illustrating possible values of main model input data describing the seawater composition (SPM and <span class="html-italic">a<sub>g</sub></span>(440)) vs. quantities that may generally characterise the remote-sensing reflectance spectrum, i.e., the hue angle α (describing the colour seen by the human eye) (panels <b>a</b>,<b>b</b>), and <span class="html-italic">u</span>(620) and <span class="html-italic">u</span>(440) (describing the magnitude of <span class="html-italic">R<sub>rs</sub></span> in the red and blue parts of the spectrum) (panels <b>c</b>–<b>f</b>). The figures also present the data from the Sopot pier.</p>
Full article ">Figure 12
<p>Sets of diagrams plotted for three Forel-Ule scale classes presenting relationships between quantities describing seawater composition (SPM, <span class="html-italic">a<sub>g</sub></span>(440) and POM/SPM) and values of <span class="html-italic">u</span>(620) or ratio of <span class="html-italic">u</span>(650)/<span class="html-italic">u</span>(675); (<b>a</b>) For FU = 7; (<b>b</b>) For FU = 9; (<b>c</b>) For FU = 17.</p>
Full article ">Figure 13
<p>(<b>a</b>–<b>d</b>) Simple statistical formulas for calculating SPM and POM concentrations as functions of either <span class="html-italic">R<sub>rs</sub></span>(625) or <span class="html-italic">R<sub>rs</sub></span>(490)/<span class="html-italic">R<sub>rs</sub></span>(625); the original formulas were developed earlier, based on data from different regions of the Baltic Sea (combination of formulas taken from references [<a href="#B21-remotesensing-12-02852" class="html-bibr">21</a>,<a href="#B23-remotesensing-12-02852" class="html-bibr">23</a>]); similar but new versions of these formulas are fitted to the new dataset from the Sopot pier (the formulas are given in each panel); (<b>e</b>,<b>f</b>) Plots of curves representing calculated POM/SPM values using these formulas; all the formulas are given against the background of points representing the new dataset from the Sopot pier (measured levels of SPM and POM, and values of <span class="html-italic">R<sub>rs</sub></span>(λ) obtained with the <span class="html-italic">Hydrolight</span> model).</p>
Full article ">Figure 14
<p>Comparison of retrieved and measured biogeochemical quantities for different variants of the new algorithms or sets of simple statistical formulas: (<b>a</b>) Results obtained with Algorithm I; (<b>b</b>) Results for Algorithm II; (<b>c</b>) Results for the set of statistical formulas using <span class="html-italic">R<sub>rs</sub></span>(625) only; (<b>d</b>) Result for the set of statistical formulas using only the ratio <span class="html-italic">R<sub>rs</sub></span>(490)/<span class="html-italic">R<sub>rs</sub></span>(625).</p>
Full article ">
16 pages, 8926 KiB  
Article
The Dimming of Lights in China during the COVID-19 Pandemic
by Christopher D. Elvidge, Tilottama Ghosh, Feng-Chi Hsu, Mikhail Zhizhin and Morgan Bazilian
Remote Sens. 2020, 12(17), 2851; https://doi.org/10.3390/rs12172851 - 2 Sep 2020
Cited by 71 | Viewed by 10241
Abstract
A satellite survey of the cumulative radiant emissions from electric lighting across China reveals a large radiance decline in lighting from December 2019 to February 2020—the peak of the lockdown established to suppress the spread of COVID-19 infections. To illustrate the changes, an analysis was also conducted on a reference set from a year prior to the pandemic. In the reference period, the majority (62%) of China’s population lived in administrative units that became brighter in March 2019 relative to December 2018. The situation reversed in February 2020, when 82% of the population lived in administrative units where lighting dimmed as a result of the pandemic. The dimming has also been demonstrated with difference images for the reference and pandemic image pairs, scattergrams, and a nightly temporal profile. The results indicate that it should be feasible to monitor declines and recovery in economic activity levels using nighttime lighting as a proxy. Full article
(This article belongs to the Special Issue Remote Sensing of Night-Time Light)
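The core comparison in the abstract — the change in cumulative radiant emissions between two monthly composites, with poorly observed grid cells excluded — can be sketched as below. This is an illustrative reconstruction, not the authors' code; the function name is hypothetical, and the threshold of two cloud-free coverages mirrors the masking described for Figure 1.

```python
import numpy as np

def percent_change_sol(rad_before, rad_after, cov_before, cov_after, min_cov=2):
    """Percent change in sum-of-lights (SOL) between two monthly DNB
    composites. Grid cells with fewer than `min_cov` cloud-free coverages
    in either month are excluded, mirroring the masking step in the paper."""
    valid = (cov_before >= min_cov) & (cov_after >= min_cov)
    sol_before = rad_before[valid].sum()
    sol_after = rad_after[valid].sum()
    return 100.0 * (sol_after - sol_before) / sol_before
```

A negative result indicates dimming (as in the February 2020 study pair); a positive result indicates brightening (as in the March 2019 reference pair for most of the population).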
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>Marking of grid cells having fewer than two cloud-free coverages in each of the four study months.</p>
Full article ">Figure 2
<p>Average radiance transect across background areas in Western China.</p>
Full article ">Figure 3
<p>Radiance transect across Harbin, China showing the brightening effect of snow cover present in December 2019. The December 2018 radiances were consistently dimmer than the December 2019 ones.</p>
Full article ">Figure 4
<p>Advanced Microwave Sounding Unit-A (AMSU-A) snow cover tallies matched to the dates used in the day-night band (DNB) monthly cloud-free composites.</p>
Full article ">Figure 5
<p>Dependence of grid cell size on latitude. Ground footprint sizes for 15 arc-second grid cells were largest at the equator and declined moving in either direction away from the equator.</p>
Full article ">Figure 6
<p>Examples of February 2020 dimming in five of China’s major cities.</p>
Full article ">Figure 7
<p>Map of the % changes in brightness for the reference period (brightness in December 2018 minus brightness in March 2019). The administrative unit boundaries are shown in black, and the population size class of each is indicated by a circle of one of five diameters placed at the administrative unit’s centroid. Administrative units where lighting was completely masked out are gray. There were three classes of increased brightness in March 2019, indicated by yellow, light green, and dark green. The orange and red classes indicate declines in brightness in March 2019. Yellow and green classes predominated in the central core of China’s densely populated heartland. (For the tabular data used to generate the maps, see <a href="#app1-remotesensing-12-02851" class="html-app">Supplementary Materials</a>.)</p>
Full article ">Figure 8
<p>Map of the % changes in brightness for the study pair spanning the pandemic (the brightness in December 2019 minus the brightness in February 2020). Declines in brightness, indicated by orange and red circles, predominated in the central core of China’s densely populated heartland. (For the tabular data used to generate the maps, see <a href="#app1-remotesensing-12-02851" class="html-app">Supplementary Materials</a>.)</p>
Full article ">Figure 9
<p>Population histograms for a gradation of brightening and dimming levels for the reference period (blue) and study period (orange). The population numbers shift between the two periods. In the reference period, the majority (62%) lived in administrative units that became brighter in March 2019 relative to December 2018. The situation reversed in February 2020, when 82% of the population lived in administrative units where lighting dimmed as a result of the pandemic.</p>
Full article ">Figure 10
<p>Pre-pandemic and pandemic scattergrams of the monthly average radiances for grid cells in Beijing.</p>
Full article ">Figure 11
<p>Percent change in monthly sum-of-lights (SOL) from the pandemic study pair versus confirmed COVID-19 cases per million people.</p>
Full article ">Figure 12
<p>Temporal profile of monthly average radiances from the DNB cloud-free composite time series for a grid cell in Beijing. Dimming during the pandemic is expressed by a sharp dip in radiance in February 2020. Similar dips were found in other months.</p>
Full article ">Figure 13
<p>A temporal slice from a north-south transect across Beijing showing dark vertical stripes in months, where dimming was spatially extensive.</p>
Full article ">Figure 14
<p>Comparison of Beijing DNB monthly composite images showing spatially extensive dimming relative to December 2019.</p>
Full article ">Figure 15
<p>November 2015 cloud-free coverages. This is one of the months showing spatially extensive DNB dimming. A large part of Eastern China had low numbers of cloud-free coverages. Zero cloud-free coverages are in red, and one cloud-free coverage is in green.</p>
Full article ">Figure 16
<p>Nightly temporal profile of DNB radiances for the grid cell covering Shanghai Disneyland.</p>
Full article ">
23 pages, 27295 KiB  
Article
Illuminating the Spatio-Temporal Evolution of the 2008–2009 Qaidam Earthquake Sequence with the Joint Use of InSAR Time Series and Teleseismic Data
by Simon Daout, Andreas Steinberg, Marius Paul Isken, Sebastian Heimann and Henriette Sudhaus
Remote Sens. 2020, 12(17), 2850; https://doi.org/10.3390/rs12172850 - 2 Sep 2020
Cited by 14 | Viewed by 4887
Abstract
Inferring the geometry and evolution of an earthquake sequence is crucial to understand how fault systems are segmented and interact. However, structural geological models are often poorly constrained in remote areas and fault inference is an ill-posed problem with a reliability that depends on many factors. Here, we investigate the geometry of the Mw 6.3 2008 and 2009 Qaidam earthquakes, in northeast Tibet, by combining InSAR time series and teleseismic data. We conduct a multi-array back-projection analysis from broadband teleseismic data and process three overlapping Envisat tracks covering the two earthquakes to extract the spatio-temporal evolution of seismic ruptures. We then integrate both geodetic and seismological data into a self-consistent kinematic model of the earthquake sequence. Our results constrain the depth and along-strike segmentation of the thrust-faulting sequence. The 2008 earthquake ruptured a ∼32° north-dipping fault that roots under the Olongbulak pop-up structure at ∼12 km depth and fault slip evolved post-seismically in a downdip direction. The 2009 earthquake ruptured three south-dipping high-angle thrusts and propagated from ∼9 km depth to the surface and bilaterally along the south-dipping segmented 55–75° high-angle faults of the Olonbulak pop-up structure that displace basin deformed sedimentary sequences above Paleozoic bedrock. Our analysis reveals that the inclusion of the post-seismic afterslip into modelling is beneficial in the determination of fault geometry, while teleseismic back-projection appears to be a robust tool for identifying rupture segmentation for moderate-sized earthquakes. These findings support the hypothesis that the Qilian Shan is expanding southward along a low-angle décollement that partitions the oblique convergence along multiple flower and pop-up structures. Full article
(This article belongs to the Section Remote Sensing in Geology, Geomorphology and Hydrology)
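The link between the modelled fault slip and the InSAR observations in this kind of study is the projection of 3-D surface displacement into the radar line of sight (LOS). The sketch below illustrates one common convention for a right-looking SAR; sign and heading conventions vary between processors, so this is an illustration rather than the exact geometry used for the Envisat tracks in the paper, and the function name is hypothetical.

```python
import numpy as np

def los_displacement(de, dn, du, incidence_deg, heading_deg):
    """Project east/north/up surface displacement (m) into the radar line
    of sight, positive towards the satellite, for a right-looking SAR with
    the given incidence angle and platform heading (degrees)."""
    inc = np.radians(incidence_deg)
    head = np.radians(heading_deg)
    # Components of the unit look vector (one common convention)
    le = -np.sin(inc) * np.cos(head)
    ln = np.sin(inc) * np.sin(head)
    lu = np.cos(inc)
    return de * le + dn * ln + du * lu
```

For a thrust earthquake such as the Qaidam events, uplift and horizontal shortening both contribute to the LOS signal, which is why combining overlapping tracks with different geometries helps constrain the dip direction.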
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>Seismotectonic setting of the Olongbulak ranges in the northeastern part of the Tibetan Plateau superimposed on a false colour Landsat 8 satellite image (U.S. Geological Survey, <a href="https://earthexplorer.usgs.gov/" target="_blank">https://earthexplorer.usgs.gov/</a>). The uplifted Paleogene, Cretaceous and Jurassic deposits form a distinctive yellow band within the Olongbulak pop-up, between dark Jurassic to Cambrian bedrock. Mapped faults from our own analysis are shown as black lines. The locations and focal mechanisms of the three <math display="inline"><semantics> <mrow> <msub> <mi>M</mi> <mi>w</mi> </msub> <mo>&gt;</mo> <mn>5.2</mn> </mrow> </semantics></math> earthquakes of the 2008–2009 Qaidam sequence are shown with their mechanisms as lower-hemisphere stereo-projections [<a href="#B64-remotesensing-12-02850" class="html-bibr">64</a>] (2008 and 2009 in green and blue, respectively). Grey transparent circles show the historical seismicity from 2003 to 2018 from the U.S. Geological Survey catalogue (M &gt; 4.0) and a regional Chinese catalogue (M &gt; 2.0, <a href="http://data.earthquake.cn/sjfw/index.html?PAGEID=websourcesearch&amp;code=D10000&amp;type=&amp;count=&amp;time=" target="_blank">http://data.earthquake.cn</a>).</p>
Full article ">Figure 2
<p>(<b>a</b>) Parametric time series (TS) decomposition for four pixels of the tracks T319, T047 and T455 marked by crosses in (<b>b</b>). (<b>b</b>) Line-of-sight (LOS) surface displacement maps wrapped between −20 mm and 20 mm associated with the 2008 (<b>b<sub>1</sub></b>) and 2009 (<b>b<sub>3</sub></b>) co-seismic surface displacements, and the corresponding afterslip surface displacements over a 4 month interval (<b>b<sub>2</sub></b>,<b>b<sub>4</sub></b>) extracted from the parametric decomposition of tracks T319 (top), T047 (middle), and T455 (bottom). Black lines show mapped fault traces of the NQT and Xietie–Shan thrust (XT).</p>
Full article ">Figure 3
<p>Co-seismic Differential interferograms and stacks from track 319 (top), 047 (middle), 455 (bottom). (<b>a</b>) 2008 coseismic Differential interferograms. (<b>b</b>) 2009 coseismic Differential interferograms. (<b>c</b>) Stack of 2008 coseismic Differential interferograms. (<b>d</b>) Stack of 2009 coseismic Differential interferograms.</p>
Full article ">Figure 4
<p>Seismic back-projection for the 10 November 2008 and the 28 August 2009 Haixi earthquakes. (<b>a</b>) P wave beampower (filled red curve) and maximum semblance from both LF and HF emissions (white filled curve) as a function of time for the 2008 (left) and 2009 (right) earthquakes. (<b>b</b>) Back-projected stack amplitudes shown for given time intervals and grid depths for the two earthquakes (green: 2008, blue: 2009) with warm colours being associated with higher semblance. Semblance peaks are numbered 1, 2, and, 3.</p>
Full article ">Figure 5
<p>Posterior models for the 10 November 2008 and 28 August 2009 earthquakes and their post-seismic deformation from optimisations obtained with fixed dip directions. (<b>a</b>) Best-fitting posterior geometries in map view for the 2008 north-dipping co-seismic (coral red), the 2008 north-dipping post-seismic (orange), the 2008 south-dipping co-seismic (pink), the 2008 south-dipping post-seismic (magenta), and for the three segments of the 2009 co-seismic (dark blue, cyan, blue) fault inferences. (<b>b</b>) As for top figure, but along the N22° E profile perpendicular to the Olongbulak Shan marked A-A’ in (<b>a</b>) and with interpreted fault geometry in the middle/upper crust. Based on the coplanarity of the 2008 co- and post-seismic slip, we interpret that the 2008 earthquake ruptured a 32° north-dipping plane at 12 km depth rooting below the Olongbulak Shan and that the after-slip was mainly down-dip of the rupture plane. The 2009 earthquake broke three distinct 55–75° high-angle south-dipping back-thrust segments of the Olongbulak pop-up structure.</p>
Full article ">Figure 6
<p>(<b>a</b>) Summary of the posterior PDFs for the optimisation of one north-dipping rectangular fault in agreement with the co-seismic (coral red) and post-seismic (orange) surface displacements of the 2008 earthquake. Dashed vertical lines are best-fitting models. (<b>b</b>) Same as (<b>a</b>), but for the optimisation of one south-dipping rectangular fault in agreement with the co-seismic (coral red) and post-seismic (orange) surface displacements of the 2008 earthquake.</p>
Full article ">Figure 7
<p>Posterior models for the 10 November 2008 earthquake obtained from the optimisation with a free dip-angle orientation based on both stack of long-baselines interferograms and teleseismic data. (<b>a</b>) Comparison between data and model from the optimisation. Left: sub-sampled surface displacements for tracks 319 (top), 047 (middle), and 455 (bottom). Middle: modeled displacements associated with the maximum likelihood of the posterior probability distribution. Right: residuals between the forward model and the observations. (<b>b</b>) Sequence plots for selected parameters of the optimisation with a color-scale that varies depending on the misfit from high (blue) to low (red). North-dipping and south-dipping bimodal solutions are explored simultaneously, as sampling in all regions is encouraged by random offsets. The Bayesian bootstrap inversion converges towards an <math display="inline"><semantics> <mrow> <mn>9.8</mn> <mo>±</mo> <mn>1</mn> </mrow> </semantics></math> km deep plane dipping towards the north with an angle of <math display="inline"><semantics> <mrow> <mn>32</mn> <mo>±</mo> <msup> <mn>10</mn> <mo>∘</mo> </msup> </mrow> </semantics></math>.</p>
Full article ">Figure 8
<p>Summary of the posterior PDFs for the optimisation of the three rectangular faults (middle, east, west) in agreement with the co-seismic surface displacements of 2009 earthquake (<b>a</b>) and with a stack of co-seismic interferograms of 2009 earthquake (<b>b</b>). Dashed vertical lines are best-fitting models.</p>
Full article ">Figure 9
<p>Comparison between posterior Probability Density Functions (PDFs) of the 2008 earthquake parameters that were obtained from the optimisation of InSAR co-seismic TS data + teleseismic data (blue) and DInSAR interferograms + teleseismic data (coral red). Dashed vertical lines are best-fitting models.</p>
Full article ">Figure 10
<p>Three-dimensional block diagram of the proposed geometry for the North Qaidam thrust system superimposed on a digital elevation model (3× vertically exaggerated), along with the cumulative LOS displacement map from descending track 319 and the 10 November 2008 and 28 August 2009 co-seismic LOS displacements profiles from <a href="#remotesensing-12-02850-f002" class="html-fig">Figure 2</a>. Inset at the bottom left shows interpreted conservation of motion vectors across the fault-system, where high-angle thrusts and folds vertically partition the horizontal shortening transferred from the South Qilian Shan to the Qaidam basin.</p>
Full article ">
27 pages, 8302 KiB  
Article
Building Extraction from Airborne LiDAR Data Based on Min-Cut and Improved Post-Processing
by Ke Liu, Hongchao Ma, Haichi Ma, Zhan Cai and Liang Zhang
Remote Sens. 2020, 12(17), 2849; https://doi.org/10.3390/rs12172849 - 2 Sep 2020
Cited by 12 | Viewed by 3957
Abstract
Building extraction from LiDAR data has been an active research area, but it is difficult to discriminate between buildings and vegetation in complex urban scenes. A building extraction method from LiDAR data based on minimum cut (min-cut) and improved post-processing is proposed. To discriminate building points on intersecting roof planes from vegetation, a point feature based on the variance of normal vectors estimated via the low-rank subspace clustering (LRSC) technique is proposed, and non-ground points are separated into two subsets based on min-cut after filtering. Then, the results of building extraction are refined via improved post-processing using restricted region growing and the constraints of height, the maximum intersection angle and consistency. The maximum intersection angle constraint removes large non-building point clusters with narrow width, such as greenbelts along streets. Contextual information and the consistency constraint are both used to eliminate inhomogeneity. Experiments on seven datasets, including five datasets provided by the International Society for Photogrammetry and Remote Sensing (ISPRS), one dataset with high-density point data and one dataset with dense buildings, verify that most buildings, even those with curved roofs, are successfully extracted by the proposed method, with over 94.1% completeness and at least 89.8% correctness at the per-area level. In addition, the proposed point feature significantly outperforms the comparison alternative and is less sensitive to the feature threshold in complex scenes. Hence, the extracted building points can be used in various applications. Full article
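The intuition behind a variance-of-normals feature is that planar building roofs yield near-parallel neighbourhood normals while vegetation yields scattered ones. The toy sketch below captures that intuition only; the paper estimates normals via LRSC, whereas this stand-in simply measures how tightly a neighbourhood's normals cluster around their mean direction, and the function name is hypothetical.

```python
import numpy as np

def normal_variance_feature(normals):
    """Dispersion of a neighbourhood's normal vectors: 0 for perfectly
    parallel normals (planar roof), larger for scattered normals
    (vegetation). `normals` is an (N, 3) array of nonzero vectors."""
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    mean = n.mean(axis=0)
    mean /= np.linalg.norm(mean)
    # Average angular deviation from the mean direction, via dot products
    return float(np.mean(1.0 - n @ mean))
```

In a min-cut formulation, such a feature could feed the data term that biases low-dispersion points towards the building label and high-dispersion points towards vegetation.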
Show Figures

Figure 1

Figure 1
<p>Algorithm flowchart.</p>
Full article ">Figure 2
<p>Normal vector. (<b>a</b>) PCA-based normal vector of a cube; (<b>b</b>) PCA-based normal vector of a building roof; (<b>c</b>) LRSC-based normal vector of a cube; (<b>d</b>) LRSC-based normal vector of a building roof.</p>
Full article ">Figure 3
<p>Visualization of<math display="inline"><semantics> <mrow> <mo> </mo> <msub> <mi>f</mi> <mi>v</mi> </msub> </mrow> </semantics></math> (rendered by the value of normalized<math display="inline"><semantics> <mrow> <mo> </mo> <msub> <mi>f</mi> <mi>v</mi> </msub> </mrow> </semantics></math>). (<b>a</b>) <math display="inline"><semantics> <mrow> <msub> <mi>f</mi> <mi>v</mi> </msub> </mrow> </semantics></math> of [<a href="#B12-remotesensing-12-02849" class="html-bibr">12</a>]; (<b>b</b>) the proposed<math display="inline"><semantics> <mrow> <mo> </mo> <msub> <mi>f</mi> <mi>v</mi> </msub> </mrow> </semantics></math>; (<b>c</b>) Orthophoto.</p>
Full article ">Figure 4
<p>Profile of a building located on a slope. (<b>a</b>) Before restricted region growing; (<b>b</b>) After restricted region growing.</p>
Full article ">Figure 5
<p>Concept of intersection angle. (<b>a</b>) Building extraction after restricted region growing; (<b>b</b>) Orthophoto; (<b>c</b>) Concept of intersection angles; (<b>d</b>) Calculation of the maximum intersection angle.</p>
Full article ">Figure 6
<p>Roof terrace. (<b>a</b>) Orthophoto; (<b>b</b>) Point data. Yellow lines are reference vectors provided by ISPRS. The approximate location of profile is indicated by the green line; (<b>c</b>) Profile before consistency constraint; (<b>d</b>) Profile after consistency constraint.</p>
Full article ">Figure 7
<p>Two trees with dense leaves. (<b>a</b>) Orthophoto; (<b>b</b>) Point data. The approximate location of profile is indicated by the green line; (<b>c</b>) Profile before consistency constraint; (<b>d</b>) Profile after consistency constraint.</p>
Full article ">Figure 8
<p>Search path in four directions.</p>
Full article ">Figure 9
<p>Test datasets. (<b>a</b>) Area 1; (<b>b</b>) Area 2; (<b>c</b>) Area 3; (<b>d</b>) Area 4; (<b>e</b>) Area 5.</p>
Full article ">Figure 9 Cont.
<p>Test datasets. (<b>a</b>) Area 1; (<b>b</b>) Area 2; (<b>c</b>) Area 3; (<b>d</b>) Area 4; (<b>e</b>) Area 5.</p>
Full article ">Figure 10
<p>Initial results of building extraction based on graph cuts and height constraints. (<b>a</b>) Results of Area 1; (<b>b</b>) Results of Area 2; (<b>c</b>) Results of Area 3; (<b>d</b>) Results of Area 4; (<b>e</b>) Results of Area 5.</p>
Full article ">Figure 11
<p>Visualization of building extraction results at the per-pixel level and error factors. (<b>a</b>) Area 1; (<b>b</b>) Area 2; (<b>c</b>) Area 3; (<b>d</b>) Area 4; (<b>e</b>) Area 5; (<b>f</b>) Details of A in Area 1; (<b>g</b>) Details of B in Area 1; (<b>h</b>) Details of C in Area 1; (<b>i</b>) Details of D in Area 2; (<b>j</b>) Details of E in Area 3; (<b>k</b>) Details of F in Area 4.</p>
Full article ">Figure 11 Cont.
<p>Visualization of building extraction results at the per-pixel level and error factors. (<b>a</b>) Area 1; (<b>b</b>) Area 2; (<b>c</b>) Area 3; (<b>d</b>) Area 4; (<b>e</b>) Area 5; (<b>f</b>) Details of A in Area 1; (<b>g</b>) Details of B in Area 1; (<b>h</b>) Details of C in Area 1; (<b>i</b>) Details of D in Area 2; (<b>j</b>) Details of E in Area 3; (<b>k</b>) Details of F in Area 4.</p>
Full article ">Figure 12
<p>Test datasets. (<b>a</b>) New Zealand dataset; (<b>b</b>) Utah dataset.</p>
Full article ">Figure 13
<p>Building extraction at the per-pixel level. (<b>a</b>) Results of New Zealand dataset; (<b>b</b>) Details in New Zealand dataset; (<b>c</b>) Results of Utah dataset.</p>
Full article ">Figure 14
<p>Q (quality) metric on a per-area level of extraction results. (<b>a</b>) Q of Area 1; (<b>b</b>) Q of Area 2; (<b>c</b>) Q of Area 3; (<b>d</b>) Average Q.</p>
Full article ">Figure 14 Cont.
<p>Q (quality) metric on a per-area level of extraction results. (<b>a</b>) Q of Area 1; (<b>b</b>) Q of Area 2; (<b>c</b>) Q of Area 3; (<b>d</b>) Average Q.</p>
Full article ">Figure 15
<p>Results of different <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>0</mn> </msub> </mrow> </semantics></math> for <math display="inline"><semantics> <mrow> <msub> <mi>f</mi> <mi>c</mi> </msub> </mrow> </semantics></math> and different weights for features. (<b>a</b>) Results of different <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>0</mn> </msub> </mrow> </semantics></math> for <math display="inline"><semantics> <mrow> <msub> <mi>f</mi> <mi>c</mi> </msub> </mrow> </semantics></math>; (<b>b</b>) Results of different weights for each feature.</p>
Full article ">Figure 15 Cont.
<p>Results of different <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>0</mn> </msub> </mrow> </semantics></math> for <math display="inline"><semantics> <mrow> <msub> <mi>f</mi> <mi>c</mi> </msub> </mrow> </semantics></math> and different weights for features. (<b>a</b>) Results of different <math display="inline"><semantics> <mrow> <msub> <mi>x</mi> <mn>0</mn> </msub> </mrow> </semantics></math> for <math display="inline"><semantics> <mrow> <msub> <mi>f</mi> <mi>c</mi> </msub> </mrow> </semantics></math>; (<b>b</b>) Results of different weights for each feature.</p>
Full article ">
18 pages, 6864 KiB  
Article
Spatio-Temporal Characteristics for Moon-Based Earth Observations
by Jing Huang, Huadong Guo, Guang Liu, Guozhuang Shen, Hanlin Ye, Yu Deng and Runbo Dong
Remote Sens. 2020, 12(17), 2848; https://doi.org/10.3390/rs12172848 - 2 Sep 2020
Cited by 15 | Viewed by 3456
Abstract
Spatio-temporal characteristics are crucial conditions for Moon-based Earth observations. In this study, we established a Moon-based Earth observation geometric model that considers the intervisibility condition between a Moon-based platform and observed points on the Earth, and that can analyze the spatio-temporal characteristics of observations of the Earth's hemisphere. Furthermore, a formula for the spherical cap of the Earth visibility region on the Moon is analytically derived. The results show that: (1) the observed Earth spherical cap has a diurnal period and varies with the nadir point; (2) the annual global observation durations in different years show two lines that almost coincide with the Arctic Circle and the Antarctic Circle; regions between the two lines remain stable, but the observation duration at the South Pole and North Pole changes every 18.6 years; (3) as the line-of-sight minimum observation elevation angle increases, the area of the intervisible spherical cap on the lunar surface decreases markedly, and this cap also varies with the distance between the barycenter of the Earth and the barycenter of the Moon. In general, this study reveals the effects of the elevation angle on the spatio-temporal characteristics and determines the change of the area on the lunar surface from which the Earth's hemisphere can be observed; this information can support the accurate calculation of Moon-based Earth hemisphere observation times. Full article
(This article belongs to the Special Issue Lunar Remote Sensing and Applications)
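The spherical cap described in the abstract follows standard visibility geometry, which can be sketched as follows. This is an illustrative sketch under assumed constants (mean Earth radius 6371 km), not the paper's full model, which additionally tracks the time-varying Earth-Moon geometry; `visible_cap_area` is a hypothetical helper:

```python
import math

R_EARTH = 6371.0  # mean Earth radius, km (assumed constant)

def visible_cap_area(d_km, min_elev_deg=0.0, r=R_EARTH):
    """Area (km^2) of the Earth spherical cap whose points see a platform at
    geocentric distance d_km at elevation >= min_elev_deg.

    Law of sines in the triangle (Earth center, surface point, platform):
    sin(eta) = (r/d) * cos(eps), where eta is the angle at the platform and
    eps the elevation angle; the cap half-angle is lambda = 90deg - eps - eta,
    and the cap area is 2*pi*r^2*(1 - cos(lambda))."""
    eps = math.radians(min_elev_deg)
    eta = math.asin((r / d_km) * math.cos(eps))
    lam = math.pi / 2.0 - eps - eta
    return 2.0 * math.pi * r * r * (1.0 - math.cos(lam))
```

At the mean Earth-Moon distance (about 384,400 km), just under half of the sphere is visible at zero elevation, and the cap shrinks as the minimum elevation angle increases, consistent with the sensitivity described in the abstract.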
Show Figures (graphical abstract): Figure 1, geometric relationship between Earth's surface features and a Moon-based platform (not to scale); Figure 2, Earth spherical cap observed from a Moon-based sensor; Figure 3, intervisibility geometry of the Moon and the Earth; Figure 4, global observation coverage from (0°,0°) on the lunar surface at several epochs on 1-2 January 2021; Figure 5, dependence of the Earth spherical cap area on the Earth-Moon distance and minimum observation elevation angle; Figure 6, influence of Earth-Moon distance on the Earth spherical cap at elevation angles of 10°-30°; Figure 7, global observation duration distributions for 2021, 2025, 2029, 2031, 2035, and 2039; Figure 8, influence of barycenter separation distance on the lunar intervisible spherical cap; Figures 9-11, lunar-surface regions from which the Earth's hemisphere can be observed at different times in January 2010, and their differences; Figure A1, the lit area viewed from a Moon-based platform and the Deep Space Climate Observatory (DSCOVR).
22 pages, 7978 KiB  
Article
High-Resolution Gridded Level 3 Aerosol Optical Depth Data from MODIS
by Pawan Gupta, Lorraine A. Remer, Falguni Patadia, Robert C. Levy and Sundar A. Christopher
Remote Sens. 2020, 12(17), 2847; https://doi.org/10.3390/rs12172847 - 2 Sep 2020
Cited by 25 | Viewed by 7185
Abstract
State-of-the-art satellite observations of atmospheric aerosols over the last two decades from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) instruments have been extensively utilized in climate change and air quality research and applications. The operational algorithms now produce Level 2 aerosol data at varying spatial resolutions (1, 3, and 10 km) and Level 3 data at 1 degree. Local and global applications have benefited from the coarse-resolution gridded data sets (i.e., Level 3, 1 degree), which are easier to use since the data volume is low and several online and offline tools are readily available to access and analyze the data with minimal computing resources. At the same time, researchers who require data at much finer spatial scales have to go through a challenging process of obtaining, processing, and analyzing larger volumes of data that require high-end computing resources and coding skills. Therefore, we created a high-resolution gridded (HRG; 0.1 × 0.1 degree) daily and monthly aerosol optical depth (AOD) product by combining two MODIS operational algorithms, namely Deep Blue (DB) and Dark Target (DT). The new HRG AODs meet the accuracy requirements of Level 2 AOD data and provide the same or greater spatial coverage on daily and monthly scales. The data sets are provided as daily and monthly files through an open FTP server, with Python scripts to read and map the data. The reduced data volume, easy-to-use format, and tools to access the data will encourage more users to utilize the data for research and applications. Full article
(This article belongs to the Special Issue Active and Passive Remote Sensing of Aerosols and Clouds)
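The core gridding step can be sketched as a per-cell average of Level 2 retrievals. This is a minimal illustration and not the operational product, which combines DT and DB, accounts for varying pixel size, and fills swath edges; `grid_aod` is a hypothetical helper:

```python
import numpy as np

def grid_aod(lat, lon, aod, res=0.1):
    """Average Level 2 AOD retrievals onto a regular res x res degree grid.

    Returns an (n_lat, n_lon) array of per-cell mean AOD; cells with no
    valid retrieval are NaN. Invalid (NaN) retrievals are skipped."""
    lat_edges = np.linspace(-90.0, 90.0, round(180 / res) + 1)
    lon_edges = np.linspace(-180.0, 180.0, round(360 / res) + 1)
    valid = np.isfinite(aod)
    # Per-cell retrieval counts and AOD sums via 2-D histograms.
    counts, _, _ = np.histogram2d(lat[valid], lon[valid],
                                  bins=[lat_edges, lon_edges])
    sums, _, _ = np.histogram2d(lat[valid], lon[valid],
                                bins=[lat_edges, lon_edges],
                                weights=aod[valid])
    with np.errstate(invalid="ignore", divide="ignore"):
        return sums / counts  # NaN where counts == 0
```

Monthly means would follow by stacking daily grids and applying `np.nanmean` along the time axis.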
Show Figures (graphical abstract): Figure 1, data-processing flowchart, pixel and grid orientations, and output variables; Figures 2-3, AOD (550 nm) from MODIS-Aqua over Asia on 9 December 2018, comparing original, gridded, spatially filled, Dark Target, Deep Blue, and combined products; Figure 4, daily and monthly percentages of grids with valid AOD for 2010; Figure 5, daily and monthly AOD distributions for January, April, July, and October 2010; Figure 6, density scatter plots of HRG AOD versus AERONET AOD for MODIS-Terra and MODIS-Aqua; Figure 7, per-station statistics (correlation, mean bias, EE%, RMSE) for stations with at least 100 collocations; Figure 8, MODIS-minus-AERONET AOD differences as a function of AOD for Terra and Aqua; Figure 9, urban-scale spatial gradients of HRG AOD resampled at 0.1-1 degree resolution; Figure 10, HRG versus coarse-resolution AOD for trans-Atlantic dust transport in June 2018; Figure 11, annual mean AOD time series for 14 regions at spatial averaging scales of 0.1-5 degrees.
17 pages, 1841 KiB  
Article
Spectral Calibration Algorithm for the Geostationary Environment Monitoring Spectrometer (GEMS)
by Mina Kang, Myoung-Hwan Ahn, Xiong Liu, Ukkyo Jeong and Jhoon Kim
Remote Sens. 2020, 12(17), 2846; https://doi.org/10.3390/rs12172846 - 2 Sep 2020
Cited by 15 | Viewed by 4387
Abstract
The Geostationary Environment Monitoring Spectrometer (GEMS) onboard the Geostationary Korean Multi-Purpose Satellite 2B was successfully launched in February 2020. GEMS is a hyperspectral spectrometer measuring solar irradiance and Earth radiance in the wavelength range of 300 to 500 nm. This paper introduces the spectral calibration algorithm for GEMS, which uses a nonlinear least-squares approach. Sensitivity tests for a series of unknown algorithm parameters, such as the spectral fitting range, spectral response function (SRF), and reference spectrum, were conducted using a synthetic GEMS spectrum prepared with the ground-measured GEMS SRF. The test results show that the required accuracy of 0.002 nm is achievable provided the SRF and the high-resolution reference spectrum are properly prepared. Such satisfactory performance is possible mainly due to the inclusion of additional fitting parameters for the spectral scale (shift, squeeze, and higher-order shifts) and the SRF (width, shape, and asymmetry). For application to actual GEMS data, the in-orbit SRF is to be monitored using an analytic SRF function and the measured GEMS solar irradiance, while a reference spectrum will be selected during the instrument in-orbit test. The calibrated GEMS data are expected to be released by the end of 2020. Full article
(This article belongs to the Section Atmospheric Remote Sensing)
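The fitting principle, matching an SRF-convolved high-resolution reference to the measured spectrum and solving for the wavelength scale, can be sketched on synthetic data. This is a heavily simplified illustration: the operational algorithm also fits squeeze, higher-order shifts, and SRF shape parameters via nonlinear least squares, whereas this sketch recovers only a rigid shift by brute-force search; the function names and the toy absorption-line spectrum are hypothetical:

```python
import numpy as np

def gaussian_srf(dw, width):
    """Normalized Gaussian SRF sampled at wavelength offsets dw (nm)."""
    g = np.exp(-0.5 * (dw / width) ** 2)
    return g / g.sum()

def to_instrument(hr_wl, hr_flux, inst_wl, width):
    """Convolve a high-resolution spectrum with the SRF at each instrument pixel."""
    return np.array([np.sum(gaussian_srf(hr_wl - w, width) * hr_flux)
                     for w in inst_wl])

def fit_shift(hr_wl, hr_flux, inst_wl, measured, width,
              search=0.05, step=0.0005):
    """Least-squares estimate of a rigid wavelength shift (nm), found by
    brute-force search over candidate shifts."""
    shifts = np.arange(-search, search + step / 2, step)
    costs = [np.sum((to_instrument(hr_wl, hr_flux, inst_wl + s, width)
                     - measured) ** 2) for s in shifts]
    return float(shifts[int(np.argmin(costs))])
```

The absorption lines in the reference spectrum play the role of the Fraunhofer lines that anchor the fit; without such structure the cost function would be flat in the shift.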
Show Figures (graphical abstract): Figure 1, flow chart of the GEMS spectral calibration algorithm (wavelength assignment from the optical-bench temperature, then iterative fitting against Fraunhofer lines in the solar spectrum); Figure 2, prelaunch SRF measured at 365.0 nm and the synthetic GEMS solar irradiance obtained from it; Figure 3, wavelength shifts derived with Chebyshev versus power polynomials for three fitting-window configurations; Figure 4, SRF variation with the asymmetric Super-Gaussian (ASG) width and shape coefficients; Figure 5, comparison of the SAO2010, KNMI, and QAS high-resolution solar reference spectra, with differences up to 8% near 300 nm.
23 pages, 8012 KiB  
Article
Spatio-Temporal Classification Framework for Mapping Woody Vegetation from Multi-Temporal Sentinel-2 Imagery
by Jovan Kovačević, Željko Cvijetinović, Dmitar Lakušić, Nevena Kuzmanović, Jasmina Šinžar-Sekulić, Momir Mitrović, Nikola Stančić, Nenad Brodić and Dragan Mihajlović
Remote Sens. 2020, 12(17), 2845; https://doi.org/10.3390/rs12172845 - 2 Sep 2020
Cited by 7 | Viewed by 5158
Abstract
The inventory of woody vegetation is of great importance for good forest management. Advancements in remote sensing techniques have provided excellent tools for such purposes, reducing the required time and labor while maintaining high accuracy and information richness. Sentinel-2 is one of the relatively new satellite missions, whose 13 spectral bands and short revisit time have proved very useful for forest monitoring. In this study, a novel spatio-temporal classification framework for mapping woody vegetation from Sentinel-2 multitemporal data is proposed. The framework is based on probability random forest classification, where temporal information is explicitly defined in the model. Because of this, several predictions are made for each pixel of the study area, which allows specific spatio-temporal aggregation to be performed. The proposed methodology has been successfully applied for mapping eight potential forest and shrubby vegetation types over the study area of Serbia. Several spatio-temporal aggregation approaches have been tested, divided into two main groups: pixel-based and neighborhood-based. The validation metrics show that determining the most common vegetation type class in a neighborhood of 5 × 5 pixels provides the best results. The overall accuracy and kappa coefficient obtained from five-fold cross-validation of the results are 82.97% and 0.75, respectively. The corresponding producer's accuracies range from 36.74% to 97.99%, and user's accuracies range from 46.31% to 98.43%. The proposed methodology proved applicable for mapping woody vegetation in Serbia and shows potential for implementation in other areas as well; further testing is necessary to confirm this. Full article
(This article belongs to the Special Issue Mapping Tree Species Diversity)
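The two aggregation families can be sketched as follows. This is an illustration only: the actual framework first produces per-date class probabilities with a probability random forest, which is omitted here, and `pixel_aggregate`/`neighborhood_mode` are hypothetical helpers:

```python
import numpy as np

def pixel_aggregate(probs):
    """Pixel-based aggregation: average per-date class probabilities over
    time, then take the most probable class.
    probs: array (T, H, W, C) of per-date class probabilities."""
    return probs.mean(axis=0).argmax(axis=-1)

def neighborhood_mode(class_map, size=5):
    """Neighborhood-based aggregation: the most common class in a
    size x size window around each pixel (windows are cropped at edges)."""
    n_classes = int(class_map.max()) + 1
    r = size // 2
    h, w = class_map.shape
    out = np.empty_like(class_map)
    for i in range(h):
        for j in range(w):
            win = class_map[max(0, i - r):i + r + 1,
                            max(0, j - r):j + r + 1]
            out[i, j] = np.bincount(win.ravel(),
                                    minlength=n_classes).argmax()
    return out
```

The 5 × 5 neighborhood mode, the best-performing variant in the abstract, suppresses isolated misclassified pixels while leaving contiguous patches intact.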
Show Figures (graphical abstract): Figure 1, study area (Republic of Serbia); Figure 2, Sentinel-2 granules and number of observations over the study area; Figure 3, locations of the woody vegetation polygons; Figure 4, flowchart of the proposed spatio-temporal classification framework; Figure 5, producer's and user's accuracies from K-fold leave-location-out cross-validation (K = 5 and K = 10); Figure 6, final woody vegetation classification map; Figure 7, temporal phenological patterns of the Sentinel-2 bands and NDVI; Figure 8, variable importance (mean decrease Gini).
20 pages, 9153 KiB  
Article
A Dual-Wavelength Ocean Lidar for Vertical Profiling of Oceanic Backscatter and Attenuation
by Kaipeng Li, Yan He, Jian Ma, Zhengyang Jiang, Chunhe Hou, Weibiao Chen, Xiaolei Zhu, Peng Chen, Junwu Tang, Songhua Wu, Fanghua Liu, Yuan Luo, Yufei Zhang and Yongqiang Chen
Remote Sens. 2020, 12(17), 2844; https://doi.org/10.3390/rs12172844 - 1 Sep 2020
Cited by 35 | Viewed by 5554
Abstract
Ocean water column information profiles are essential for ocean research. Currently, water column profiles are typically obtained by ocean lidar instruments, including spaceborne, airborne, and shipborne lidar, most of which are equipped with a 532 nm laser; however, blue wavelengths penetrate deeper for open-ocean detection. In this paper, we present a novel airborne dual-wavelength ocean lidar (DWOL), equipped with 532 and 486 nm lasers that can operate simultaneously. This instrument was designed to compare the performance of the 486 and 532 nm lasers in a single detection area and to provide a reference for future spaceborne oceanic lidar (SBOL) design. Airborne and shipborne experiments were conducted in the South China Sea. Results show that, for a 500-frame accumulation, the 486 nm channel obtained volume profiles from a depth of approximately 100 m. In contrast, the vertical profiles obtained by the 532 nm channel only reached a depth of 75 m, approximately 25% less than that of the 486 nm channel in the same detection area. Results for the inverted lidar attenuation coefficient α(z) from the DWOL show that the maximum value of α(z) occurred at depths of 40 to 80 m, which was consistent with the chlorophyll-scattering layer (CSL) distribution measured by the shipborne instrument. Additionally, α486(z) decreased at depths beyond 80 m, indicating that the 486 nm laser can potentially penetrate the entire CSL. Full article
(This article belongs to the Section Ocean Remote Sensing)
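The attenuation-coefficient retrieval rests on the standard slope method for a two-way attenuated return. This is a minimal sketch only; the paper's processing additionally removes dark counts and after-pulse counts and accumulates hundreds of frames before inversion, and `lidar_attenuation` is a hypothetical helper:

```python
import numpy as np

def lidar_attenuation(depth_m, signal):
    """Lidar attenuation coefficient alpha(z) in 1/m via the slope method.

    For a two-way attenuated return S(z) ~ exp(-2 * integral_0^z alpha dz'),
    taking the log and differentiating gives alpha(z) = -0.5 * d ln S / dz.
    np.gradient uses central differences, so a depth-varying alpha is
    recovered locally."""
    return -0.5 * np.gradient(np.log(signal), depth_m)
```

In practice the raw signal would first be range-corrected and noise-corrected; the slope relation itself is unchanged.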
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>Schematic diagram of the dual-wavelength ocean lidar (DWOL) system. TM—transmitting mirror; AAS—adjustable aperture stop; CL—collimating lens; LDM—long-pass dichroic mirror; FL—focusing lens; PMT—photomultiplier tube; FFC—flexible flat cable.</p>
Full article ">Figure 2
<p>Schematic diagram of the 486 and 532 nm lasers.</p>
Full article ">Figure 3
<p>Experiments in the South China Sea. (<b>a</b>) Distributed flight track and selected in situ sites; (<b>b</b>) photograph of the DWOL system in airborne experiment.</p>
Full article ">Figure 4
<p>(<b>a</b>) Data for a single frame acquired by the acquisition board. Data obtained near in situ site <span class="html-italic">G3</span>, for a photomultiplier tube (PMT) control voltage of 0.95-V and a plane height of 2100 m; (<b>b</b>) average result for 100 frames of data acquired within 1 s near <span class="html-italic">G3</span>. Data acquired after 1000 ns were used as a curve-fit for the detector system baseline; (<b>c</b>) curve-fit results for the baselines of 532- and 486-nm channels. R-square values for 532- and 486 nm channels are 0.9661 and 0.9198, respectively; (<b>d</b>) dark count statistics for 532 and 486 nm channels, which were obtained without the laser.</p>
Full article ">Figure 5
<p>Diagram of the after-pulse count calibration system (APCCS).</p>
Full article ">Figure 6
<p>Distribution of the APC from attenuation of 0 to −40 dB. (<b>a</b>) attenuation of 0 dB; (<b>b</b>) attenuation of −5 dB; (<b>c</b>) attenuation of −10 dB; (<b>d</b>) attenuation of −15 dB; (<b>e</b>) attenuation of −20 dB; (<b>f</b>) attenuation of −25 dB; (<b>g</b>) attenuation of −30 dB; (<b>h</b>) attenuation of −35 dB; (<b>i</b>) attenuation of −40 dB.</p>
Full article ">Figure 7
<p>(<b>a</b>) Diagram of the non-saturated value reference (NSVR) method; (<b>b</b>) distribution of sites for capturing characteristic data; (<b>c</b>) non-saturated data for the 532- and 486 nm channels.</p>
Full article ">Figure 8
<p>(<b>a</b>) Change in weight with water depth for 532 nm, relative to the reference depth; (<b>b</b>) change in weight with water depth for 486 nm relative to the reference depth; (<b>c</b>) attenuation of weight from the surface to the reference depth for the 532 nm channel; (<b>d</b>) attenuation of weight from the surface to the reference depth for the 486 nm channel.</p>
Full article ">Figure 9
<p>Selection of reference depths for the 532 and 486 nm channels.</p>
Full article ">Figure 10
<p>Diagram of the data processing.</p>
Full article ">Figure 11
<p>Chlorophyll concentration at <span class="html-italic">G1</span>, <span class="html-italic">G2</span>, <span class="html-italic">B3</span> and <span class="html-italic">C2</span>, measured in the shipborne experiment using the RBR XR-420. (<b>a</b>) chlorophyll concentration at <span class="html-italic">G1</span>; (<b>b</b>) chlorophyll concentration at <span class="html-italic">G2</span>; (<b>c</b>) chlorophyll concentration at <span class="html-italic">B3</span>; (<b>d</b>) chlorophyll concentration at <span class="html-italic">C2</span>.</p>
Full article ">Figure 12
<p>Diagram of in situ profile chlorophyll processing with Monte Carlo (MC) simulations.</p>
Full article ">Figure 13
<p>(<b>a</b>) Position of selected data; (<b>b</b>) one frame of data from the selected dataset.</p>
Full article ">Figure 14
<p>(<b>a</b>) Received photons for 500 accumulated frames without any corrections; (<b>b</b>) received photons with the total dark count (TDC) and the after-pulse count (APC) removed; (<b>c</b>) comparison between the original and the corrected results.</p>
Full article ">Figure 15
<p>(<b>a</b>–<b>c</b>) Comparison between the simulation results and the airborne results at an altitude of 2000 m; (<b>d</b>) comparison between the simulation results and the airborne results at an altitude of 2500 m.</p>
Full article ">Figure 16
<p>Signal-to-noise ratio (SNR) of the receiving signal photons. (<b>a</b>) SNR of <span class="html-italic">G1</span>; (<b>b</b>) SNR of <span class="html-italic">G2</span>; (<b>c</b>) SNR of <span class="html-italic">B3</span>; (<b>d</b>) SNR of <span class="html-italic">C2</span>. The SNR detection threshold for our DWOL system is 2. The corresponding depths indicated by the red lines were obtained from the median depth at which the SNR fell below 2.</p>
Full article ">Figure 17
<p>Distribution of lidar attenuation <math display="inline"><semantics> <mrow> <mi>α</mi> <mo stretchy="false">(</mo> <mi>z</mi> <mo stretchy="false">)</mo> </mrow> </semantics></math> from <span class="html-italic">G1</span> to <span class="html-italic">C2</span>. (<b>a</b>) Distribution of lidar attenuation at <span class="html-italic">G1</span>; (<b>b</b>) distribution of lidar attenuation at <span class="html-italic">G2</span>; (<b>c</b>) distribution of lidar attenuation at <span class="html-italic">B3</span>; (<b>d</b>) distribution of lidar attenuation at <span class="html-italic">C2</span>. To enhance the stability of the inverted results, the range-corrected return signals <math display="inline"><semantics> <mrow> <mi>S</mi> <msup> <mrow/> <mo>′</mo> </msup> </mrow> </semantics></math> were smoothed by applying a moving average with a 41-ns span. The magenta curves were obtained by a secondary smoothing of the inverted results using the Savitzky–Golay method with a 51-ns span.</p>
Full article ">Figure 18
<p>(<b>a</b>) Distribution of <math display="inline"><semantics> <mrow> <msub> <mi>α</mi> <mrow> <mn>532</mn> </mrow> </msub> <mo stretchy="false">(</mo> <mi>z</mi> <mo stretchy="false">)</mo> </mrow> </semantics></math> from Selected-A to Selected-B; (<b>b</b>) distribution of <math display="inline"><semantics> <mrow> <msub> <mi>α</mi> <mrow> <mn>486</mn> </mrow> </msub> <mo stretchy="false">(</mo> <mi>z</mi> <mo stretchy="false">)</mo> </mrow> </semantics></math> from Selected-A to Selected-B; (c) distribution of <math display="inline"><semantics> <mrow> <msub> <mi>α</mi> <mrow> <mn>532</mn> </mrow> </msub> <mo stretchy="false">(</mo> <mi>z</mi> <mo stretchy="false">)</mo> </mrow> </semantics></math> from Selected-C to Selected-D; (<b>d</b>) distribution of <math display="inline"><semantics> <mrow> <msub> <mi>α</mi> <mrow> <mn>486</mn> </mrow> </msub> <mo stretchy="false">(</mo> <mi>z</mi> <mo stretchy="false">)</mo> </mrow> </semantics></math> from Selected-C to Selected-D.</p>
Full article ">Figure 19
<p>(<b>a</b>) Distribution of <math display="inline"><semantics> <mrow> <msub> <mi>α</mi> <mrow> <mn>532</mn> </mrow> </msub> <mo stretchy="false">(</mo> <mi>z</mi> <mo stretchy="false">)</mo> </mrow> </semantics></math> from Selected-A to Selected-B; (<b>b</b>) distribution of <math display="inline"><semantics> <mrow> <msub> <mi>α</mi> <mrow> <mn>486</mn> </mrow> </msub> <mo stretchy="false">(</mo> <mi>z</mi> <mo stretchy="false">)</mo> </mrow> </semantics></math> from Selected-A to Selected-B; (<b>c</b>) distribution of <math display="inline"><semantics> <mrow> <msub> <mi>α</mi> <mrow> <mn>532</mn> </mrow> </msub> <mo stretchy="false">(</mo> <mi>z</mi> <mo stretchy="false">)</mo> </mrow> </semantics></math> from Selected-C to Selected-D; (<b>d</b>) distribution of <math display="inline"><semantics> <mrow> <msub> <mi>α</mi> <mrow> <mn>486</mn> </mrow> </msub> <mo stretchy="false">(</mo> <mi>z</mi> <mo stretchy="false">)</mo> </mrow> </semantics></math> from Selected-C to Selected-D. Values of <math display="inline"><semantics> <mrow> <mi>α</mi> <mo stretchy="false">(</mo> <mi>z</mi> <mo stretchy="false">)</mo> </mrow> </semantics></math> are limited to the range 0–0.2 for better visualization.</p>
Full article ">
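The Figure 17 caption above describes smoothing the range-corrected return signal S′ with a moving average before inverting for the lidar attenuation α(z). As an illustration only, the textbook slope method (α(z) = −½ d ln S′/dz) can be sketched as below; the paper's actual inversion scheme is not reproduced here, and the function name and the default span are illustrative:

```python
import numpy as np

def slope_method_attenuation(depth, range_corrected_signal, span=41):
    """Estimate lidar attenuation alpha(z) with the slope method.

    Smooths the range-corrected signal S' with a moving average over
    `span` bins, then applies alpha(z) = -0.5 * d(ln S')/dz.
    """
    kernel = np.ones(span) / span
    smoothed = np.convolve(range_corrected_signal, kernel, mode="same")
    log_signal = np.log(np.clip(smoothed, 1e-12, None))  # guard against log(0)
    return -0.5 * np.gradient(log_signal, depth)
```

For a purely exponential return S′(z) = exp(−2αz), the estimate recovers α exactly away from the window edges; real returns require the kind of secondary smoothing the caption mentions.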
16 pages, 6366 KiB  
Article
Thermal Infrared and Ionospheric Anomalies of the 2017 Mw6.5 Jiuzhaigou Earthquake
by Meijiao Zhong, Xinjian Shan, Xuemin Zhang, Chunyan Qu, Xiao Guo and Zhonghu Jiao
Remote Sens. 2020, 12(17), 2843; https://doi.org/10.3390/rs12172843 - 1 Sep 2020
Cited by 18 | Viewed by 3106
Abstract
Taking the 2017 Mw6.5 Jiuzhaigou earthquake as a case study, ionospheric disturbances (i.e., total electron content, TEC) and thermal infrared (TIR) anomalies were investigated simultaneously. The characteristics of the temperature of brightness blackbody (TBB), medium-wave infrared brightness (MIB), and outgoing longwave radiation (OLR) were extracted and compared with those of the ionospheric TEC. We observed different relationships among the three types of TIR radiation under seismic and aseismic conditions. A wide range of positive TEC anomalies occurred to the south of the epicenter. The area south of the Huarong mountain fracture, which contained the maximum TEC anomaly amplitudes, overlapped one of the regions with notable TIR anomalies. For the first time, we observed three stages of increasing TIR radiation, with ionospheric TEC anomalies appearing after each stage. There was also a high spatial correspondence between both the TIR and TEC anomalies and the regional geological structure. Together with the time series data, these results suggest that the genesis of the TEC anomalies might be related to the increasing TIR. Full article
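TEC anomaly extraction of the kind reported here is typically a sliding-window envelope test: an observation is flagged when it falls outside the mean ± 2σ of a trailing reference window (compare the 2σ threshold in the Figure 7 caption below). A minimal numpy sketch under that assumption; the function name and window length are illustrative, not the authors' exact procedure:

```python
import numpy as np

def tec_anomalies(tec, window=27, k=2.0):
    """Flag TEC values outside mean +/- k*sigma of a trailing window."""
    tec = np.asarray(tec, dtype=float)
    flags = np.zeros(tec.size, dtype=bool)
    for i in range(window, tec.size):
        ref = tec[i - window:i]            # trailing reference window
        mu, sigma = ref.mean(), ref.std()
        flags[i] = abs(tec[i] - mu) > k * sigma
    return flags
```

Indices earlier than one full window are never flagged, since no complete reference window precedes them.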
Show Figures

Graphical abstract

Full article ">Figure 1
<p>Results of wavelet transformation of the temperature of brightness blackbody (TBB). (<b>a</b>) Original record of brightness temperature; (<b>b</b>) result with second-order scale analysis; (<b>c</b>) details of the second-order wavelet; (<b>d</b>) results with seventh-order scale analysis; and (<b>e</b>) difference between second-order scale analysis and seventh-order scale analysis.</p>
Full article ">Figure 2
<p>Distribution of Global Navigation Satellite System (GNSS) base stations in the eight provinces around the 2017 <math display="inline"><semantics> <mrow> <msub> <mi>M</mi> <mi>w</mi> </msub> </mrow> </semantics></math>6.5 Jiuzhaigou earthquake. Black triangles show the locations of GNSS base stations, and the red star shows the epicenter location.</p>
Full article ">Figure 3
<p>Evolution of (<b>a</b>) medium-wave infrared brightness (MIB), (<b>b</b>) temperature of brightness blackbody (TBB), and (<b>c</b>) outgoing longwave radiation (OLR) anomalies from 8 July to 6 September 2017. The black star shows the epicenter location.</p>
Full article ">Figure 3 Cont.
<p>Evolution of (<b>a</b>) medium-wave infrared brightness (MIB), (<b>b</b>) temperature of brightness blackbody (TBB), and (<b>c</b>) outgoing longwave radiation (OLR) anomalies from 8 July to 6 September 2017. The black star shows the epicenter location.</p>
Full article ">Figure 4
<p>Temperature of brightness blackbody (TBB) anomaly on 7 August 2017 (Region 1: Yingxiu-Beichuan fault and Region 2: Huarong mountain fault).</p>
Full article ">Figure 5
<p>Power spectra time series for full frequency bands in the study region from 1 January 2014 to 30 September 2017. Power spectra of (<b>a</b>) temperature of brightness blackbody (TBB), (<b>b</b>) outgoing longwave radiation (OLR), and (<b>c</b>) medium-wave infrared brightness (MIB) in Region 1; power spectra of (<b>d</b>) TBB, (<b>e</b>) OLR, and (<b>f</b>) MIB in Region 2; power spectra of (<b>g</b>) TBB, (<b>h</b>) OLR, and (<b>i</b>) MIB in the epicentral region.</p>
Full article ">Figure 5 Cont.
<p>Power spectra time series for full frequency bands in the study region from 1 January 2014 to 30 September 2017. Power spectra of (<b>a</b>) temperature of brightness blackbody (TBB), (<b>b</b>) outgoing longwave radiation (OLR), and (<b>c</b>) medium-wave infrared brightness (MIB) in Region 1; power spectra of (<b>d</b>) TBB, (<b>e</b>) OLR, and (<b>f</b>) MIB in Region 2; power spectra of (<b>g</b>) TBB, (<b>h</b>) OLR, and (<b>i</b>) MIB in the epicentral region.</p>
Full article ">Figure 6
<p>Power spectra time series of temperature of brightness blackbody (TBB; blue lines), outgoing longwave radiation (OLR; green lines), and medium-wave infrared brightness (MIB; orange lines) showing thermal radiation anomalies in (<b>a</b>) Region 1 and (<b>b</b>) Region 2 and showing periods with no thermal radiation anomalies in (<b>c</b>) Region 1, (<b>d</b>) Region 2, and (<b>e</b>) the epicenter region. Red arrows denote the day of the 2017 <math display="inline"><semantics> <mrow> <msub> <mi>M</mi> <mi>w</mi> </msub> </mrow> </semantics></math>6.5 Jiuzhaigou earthquake.</p>
Full article ">Figure 7
<p>Total electron content (TEC) anomalies detected with a sliding-window standard deviation method: (<b>a</b>) seismic anomalies at the GZGY, SCJU, and LUZH stations and (<b>b</b>) aseismic anomalies at the NXYC, GSJY, and QHGC stations. Thick red lines denote observation data, blue lines show the mean curve, purple lines denote the threshold range of 2σ, and thin red lines denote TEC anomalies.</p>
Full article ">Figure 8
<p>Spatial distribution of total electron content (TEC) anomalies on 8 August 2017. Black triangles show the locations of Global Navigation Satellite System (GNSS) base stations, and the red five-pointed star shows the epicenter location of the 2017 <math display="inline"><semantics> <mrow> <msub> <mi>M</mi> <mi>w</mi> </msub> </mrow> </semantics></math>6.5 Jiuzhaigou earthquake.</p>
Full article ">Figure 9
<p>Increasing trends of thermal radiation (mean power spectrum of temperature of brightness blackbody (TBB), outgoing longwave radiation (OLR), and medium-wave infrared brightness (MIB)) and total electron content (TEC). Red arrows denote phases of increasing thermal radiation; purple arrows denote TEC disturbances.</p>
Full article ">
17 pages, 71904 KiB  
Technical Note
Very Local Subsidence Near the Hot Spring Region in Hakone Volcano, Japan, Inferred from InSAR Time Series Analysis of ALOS/PALSAR Data
by Ryosuke Doke, George Kikugawa and Kazuhiro Itadera
Remote Sens. 2020, 12(17), 2842; https://doi.org/10.3390/rs12172842 - 1 Sep 2020
Cited by 14 | Viewed by 3868 | Correction
Abstract
Monitoring of surface displacement by satellite-based interferometric synthetic aperture radar (InSAR) analysis is an effective method for detecting land subsidence in areas where leveling routes are undeveloped, such as mountainous areas. In particular, InSAR-based monitoring around well-developed hot spring resorts, such as those in Japan, is useful for conserving hot spring resources. Hakone Volcano is one of the major hot spring resorts in Japan, and many hot spring wells have been developed in the Owakudani fumarole area, where a small phreatic eruption occurred in 2015. In this study, we performed an InSAR time series analysis using the small baseline subset (SBAS) method and ALOS/PALSAR scenes of Hakone Volcano to monitor surface displacements around the volcano. The results of the SBAS-InSAR time series analysis show highly localized subsidence to the west of Owakudani from 2006 to 2011, when the ALOS/PALSAR satellite was in operation. The subsiding area was approximately 500 m in diameter, and the peak subsidence rate was approximately 25 mm/year. Modeling with a point pressure source suggested that the subsidence was caused by a contraction at approximately 700 m above sea level (about 300 m below the ground surface), at a rate estimated to be 1.04 × 10<sup>4</sup> m<sup>3</sup>/year. Hot spring water is collected from a nearby well at almost the same depth as the contraction source, and its main dissolved ion is chloride, suggesting that the hydrothermal fluids are supplied from deep within the volcano. The land subsidence suggests that the fumarole activity is attenuating due to a decrease in the supply of hydrothermal fluids from deeper areas. Full article
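A point pressure source of the kind inverted for here is commonly modeled with the Mogi (1958) approximation, in which a volume change ΔV at depth d below a flat half-space produces a vertical surface displacement u_z(r) = (1 − ν) ΔV d / (π (r² + d²)^{3/2}). The sketch below is not the authors' inversion code (parameter names are illustrative), but it shows that a source roughly 300 m deep contracting at ~1.04 × 10⁴ m³/year yields a peak subsidence on the order of the reported ~25 mm/year:

```python
import numpy as np

def mogi_uz(r, depth, dV, nu=0.25):
    """Vertical surface displacement (m) of a Mogi point source.

    r: horizontal distance(s) from the source axis (m); depth: source
    depth below the surface (m); dV: volume change (m^3, negative for
    contraction); nu: Poisson's ratio.
    """
    r = np.asarray(r, dtype=float)
    return (1.0 - nu) * dV * depth / (np.pi * (r ** 2 + depth ** 2) ** 1.5)

# Roughly the source reported in the abstract: ~300 m deep,
# contracting at ~1.04e4 m^3/year.
peak = mogi_uz(0.0, depth=300.0, dV=-1.04e4)  # m/year at r = 0
```

Evaluating at the source axis gives peak ≈ −0.028 m/year, i.e. a few tens of millimeters of subsidence per year, consistent in magnitude with the observed peak rate.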
(This article belongs to the Special Issue Monitoring Land Subsidence Using Remote Sensing)
Show Figures

Graphical abstract

Full article ">Figure 1
<p>Map showing the area around Hakone Volcano. The topographical map is based on 50 m mesh height data released by the Geospatial Information Authority of Japan (GSI). The red and blue rectangles show the area included in the scenes of the ascending and descending orbits, respectively. Red points show the location of ground control points used for the analysis.</p>
Full article ">Figure 2
<p>Geological map around Hakone Volcano (modified from the Seamless Geological map of Japan [<a href="#B6-remotesensing-12-02842" class="html-bibr">6</a>]). The original geological map was simplified based on ages and types of rocks. Contour lines in which intervals are 100 m were extracted from 10 m mesh height data released by the GSI. The area enclosed by the rectangle indicates the focus area of this study (<a href="#remotesensing-12-02842-f003" class="html-fig">Figure 3</a>).</p>
Full article ">Figure 3
<p>Location map of the focused area of this study. The base map is a true-color image captured by ALOS/AVNIR-2 on 10 November 2006. Contour lines in which intervals are 50 m were extracted from 10 m mesh height data released by the GSI.</p>
Full article ">Figure 4
<p>Temporal and spatial baselines for the SBAS-InSAR analysis of ALOS/PALSAR data from the (<b>a</b>) ascending orbit, and the (<b>b</b>) descending orbit. Red points show the super primary scenes used for the analysis, which the software selected as the scenes with the highest number of connections to the other scenes.</p>
Full article ">Figure 5
<p>Distribution of line-of-sight (LOS) velocity in the ascending orbit for the (<b>a</b>) entire region; (<b>b</b>) area of interest shown in the black rectangle (inset) in (<b>a</b>). The base map of (<b>a</b>) is a topographical map based on 10 m mesh height data released by the GSI, and its color scale is the same as <a href="#remotesensing-12-02842-f001" class="html-fig">Figure 1</a>. Intervals of contour lines in (<b>b</b>) are 50 m in height.</p>
Full article ">Figure 6
<p>Distribution of LOS velocity in the descending orbit for the (<b>a</b>) entire region; (<b>b</b>) area of interest shown in the black rectangle (inset) in (<b>a</b>). The base map of (<b>a</b>) is a topographical map based on 10 m mesh height data released by the GSI, and its color scale is the same as <a href="#remotesensing-12-02842-f001" class="html-fig">Figure 1</a>. Intervals of contour lines in (<b>b</b>) are 50 m in height.</p>
Full article ">Figure 7
<p>Distribution of quasi-eastward velocity over the (<b>a</b>) entire region; (<b>b</b>) area of interest shown in the black rectangle (inset) in (<b>a</b>). The base map of (<b>a</b>) is a topographical map based on 10 m mesh height data released by the GSI, and its color scale is the same as <a href="#remotesensing-12-02842-f001" class="html-fig">Figure 1</a>. Intervals of contour lines in (<b>b</b>) are 50 m in height.</p>
Full article ">Figure 8
<p>Distribution of quasi-upward velocity over the (<b>a</b>) entire region; (<b>b</b>) area of interest shown in the black rectangle (inset) in (<b>a</b>). The base map of (<b>a</b>) is a topographical map based on 10 m mesh height data released by the GSI, and its color scale is the same as <a href="#remotesensing-12-02842-f001" class="html-fig">Figure 1</a>. Intervals of contour lines in (<b>b</b>) are 50 m in height.</p>
Full article ">Figure 9
<p>Time variation of displacement in pixels where the maximum velocities were observed to the west of Owakudani. Displacements were obtained by subtracting the displacement of an arbitrarily selected reference point near Sengokuhara (the green points in <a href="#remotesensing-12-02842-f005" class="html-fig">Figure 5</a>b and <a href="#remotesensing-12-02842-f006" class="html-fig">Figure 6</a>b) to remove the displacement around the Hakone Volcano.</p>
Full article ">Figure 10
<p>Results of the inversion analysis for estimating the point pressure source beneath the ground surface to the west of Owakudani: (<b>a</b>) observed velocity in the ascending orbit; (<b>b</b>) simulated velocity in the ascending orbit; (<b>c</b>) observed velocity in the descending orbit; (<b>d</b>) simulated velocity in the descending orbit. Contour lines in (<b>a</b>,<b>c</b>), in which intervals are 25 m, were extracted from 10 m mesh height data released by the GSI. The yellow “x” marks the location of the point pressure source, the parameters of which are shown in <a href="#remotesensing-12-02842-t002" class="html-table">Table 2</a>. Estimated offsets for the ascending and descending orbits are +0.15 and +2.43 mm/year, respectively. The color scales used for the velocity are the same as those in <a href="#remotesensing-12-02842-f005" class="html-fig">Figure 5</a> and <a href="#remotesensing-12-02842-f006" class="html-fig">Figure 6</a>.</p>
Full article ">Figure 11
<p>Residual root mean square (RMS) values estimated by forward modeling and varying (<b>a</b>) altitude and (<b>b</b>) volume variation. Altitude and volume variation changes every 50 m and 2000 m<sup>3</sup>/year, respectively. During forward modeling, the other parameters were held at optimum values (<a href="#remotesensing-12-02842-t002" class="html-table">Table 2</a>).</p>
Full article ">Figure 12
<p>Characteristics of the hot spring wells to the west of Owakudani as (<b>a</b>) a horizontal distribution of hot spring wells (orange-colored points) on the map of quasi-upward velocity; (<b>b</b>) cross-section indicated by A-A’ in (<b>a</b>). Contour lines in (<b>a</b>), in which intervals are 100 m, were extracted from 10 m mesh height data released by the GSI. MO (Motohakone) and SE (Sengokuhara) indicate the numbers of hot spring wells managed by the Odawara Health and Welfare Office. Gray and red colors in (<b>b</b>) show the extent of cased and uncased parts of the borehole, respectively. MO4 is a natural hot spring at the surface.</p>
Full article ">Figure 13
<p>Triangular plot of major anions in hot spring water from wells to the west of Owakudani. The numbers in the triangular plot indicate the composition (%) of each ion. The colored circles show the chloride ion content in the hot spring water. The numbers of the hot spring wells and their locations are shown in <a href="#remotesensing-12-02842-f012" class="html-fig">Figure 12</a>.</p>
Full article ">Figure 14
<p>False-color image around the Owakudani fumarole area captured by ALOS/AVNIR-2 on 31 January 2007. The red color in the image indicates areas of vegetation, and in the Owakudani fumarole area, there is no vegetation due to geothermal and fumarolic activity.</p>
Full article ">
26 pages, 7507 KiB  
Article
Investigations into Synoptic Spatiotemporal Characteristics of Coastal Upper Ocean Circulation Using High Frequency Radar Data and Model Output
by Lei Ren, Nanyang Chu, Zhan Hu and Michael Hartnett
Remote Sens. 2020, 12(17), 2841; https://doi.org/10.3390/rs12172841 - 1 Sep 2020
Cited by 7 | Viewed by 2894
Abstract
Numerical models and remote sensing observation systems such as radars are useful for providing information on surface flows in coastal areas. Evaluating their performance and extracting synoptic characteristics are challenging and important tasks. This research investigates the synoptic characteristics of surface flow fields through a detailed analysis of model results and high frequency radar (HFR) data using self-organizing map (SOM) and empirical orthogonal function (EOF) analysis. A dataset of surface flow fields covering thirteen days from these two sources was used. A SOM topology map of size 4 × 3 was developed to explore the spatial patterns of the surface flows. Additionally, the surface flow patterns obtained from SOM and EOF analysis were compared. Results illustrate that both SOM and EOF analysis are valuable tools for extracting characteristic surface current patterns. The comparisons indicated that the SOM technique displays the synoptic characteristics of surface flow fields in more detail than EOF analysis. The extracted synoptic surface current patterns are useful in a variety of applications, such as oil spill treatment and search and rescue. This research provides an approach to using powerful tools to diagnose ocean processes from different perspectives. Moreover, it is of great significance to assess SOM as a potential forecasting tool for coastal surface currents. Full article
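In practice, EOF analysis of a surface current dataset reduces to a singular value decomposition of the space-time anomaly matrix: the right singular vectors are the spatial modes (cf. Figures 8 and 9 below), the scaled left singular vectors are the principal components (Figure 11), and the squared singular values give the explained variance (Figure 10). A minimal numpy sketch; the function and variable names are ours, not from the paper:

```python
import numpy as np

def eof_analysis(field, n_modes=3):
    """EOF decomposition of a (time, space) data matrix via SVD.

    Returns the leading spatial modes, their principal components (PCs),
    and the fraction of variance explained by each mode.
    """
    anomaly = field - field.mean(axis=0)           # remove the temporal mean
    U, s, Vt = np.linalg.svd(anomaly, full_matrices=False)
    explained = s ** 2 / np.sum(s ** 2)
    modes = Vt[:n_modes]                           # spatial patterns
    pcs = U[:, :n_modes] * s[:n_modes]             # time-varying amplitudes
    return modes, pcs, explained[:n_modes]
```

The anomaly field is recovered (or truncated) by `pcs @ modes`, which is what a reconstruction from the leading modes amounts to.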
(This article belongs to the Special Issue Coastal Waters Monitoring Using Remote Sensing Technology)
Show Figures

Graphical abstract

Full article ">Figure 1
<p>Deployment of the high frequency radar (HFR) system (C1 and C2 indicate the deployment locations of the HFR stations).</p>
Full article ">Figure 2
<p>Architecture of a <math display="inline"> <semantics> <mrow> <mn>4</mn> <mo>×</mo> <mn>3</mn> </mrow> </semantics> </math> self-organizing map (SOM) network.</p>
Full article ">Figure 3
<p>Characteristic spatial patterns of surface currents from model results extracted by a 4 × 3 SOM analysis (subfigures (<b>a</b>–<b>l</b>) indicate the twelve spatial SOM patterns; the occurrence frequency of each pattern is given as a percentage at the top left).</p>
Full article ">Figure 3 Cont.
<p>Characteristic spatial patterns of surface currents from model results extracted by a 4 × 3 SOM analysis (subfigures (<b>a</b>–<b>l</b>) indicate the twelve spatial SOM patterns; the occurrence frequency of each pattern is given as a percentage at the top left).</p>
Full article ">Figure 4
<p>Characteristic spatial patterns of surface currents from radar extracted by a 4 × 3 SOM analysis (subfigures (<b>a</b>–<b>l</b>) indicate the twelve spatial SOM patterns; the occurrence frequency of each pattern is given as a percentage at the top left).</p>
Full article ">Figure 4 Cont.
<p>Characteristic spatial patterns of surface currents from radar extracted by a 4 × 3 SOM analysis (subfigures (<b>a</b>–<b>l</b>) indicate the twelve spatial SOM patterns; the occurrence frequency of each pattern is given as a percentage at the top left).</p>
Full article ">Figure 5
<p>Wind roses during the analysis period (locations W1–W6 are shown in subfigures (<b>a</b>–<b>f</b>), respectively; directions indicate where the wind blows from).</p>
Full article ">Figure 6
<p>Mean ECMWF (European Centre for Medium-Range Weather Forecasts) wind vectors during the analysis period.</p>
Full article ">Figure 7
<p>Time series of best match unit (<span class="html-italic">BMU</span>) corresponding to 12 SOM patterns ((<b>a</b>) model results and (<b>b</b>) radar data).</p>
Full article ">Figure 8
<p>Spatial empirical orthogonal function (EOF) modes of model results (subfigures (<b>a</b>–<b>f</b>) indicate the first through sixth EOF eigenvector modes).</p>
Full article ">Figure 9
<p>Spatial EOF modes of radar data (subfigures (<b>a</b>–<b>f</b>) indicate the first through sixth EOF eigenvector modes).</p>
Full article ">Figure 10
<p>Variance of EOF modes.</p>
Full article ">Figure 11
<p>The first three EOF principal components (PCs): (<b>a</b>) model results and (<b>b</b>) HFR data.</p>
Full article ">Figure 12
<p>Spectral analysis of EOF PCs between model results and radar data ((<b>a</b>–<b>c</b>) indicate spectral analysis for EOF PC1, PC2, and PC3, respectively).</p>
Full article ">
10 pages, 2853 KiB  
Technical Note
Highly Local Model Calibration with a New GEDI LiDAR Asset on Google Earth Engine Reduces Landsat Forest Height Signal Saturation
by Sean P. Healey, Zhiqiang Yang, Noel Gorelick and Simon Ilyushchenko
Remote Sens. 2020, 12(17), 2840; https://doi.org/10.3390/rs12172840 - 1 Sep 2020
Cited by 56 | Viewed by 11603
Abstract
While Landsat has proved to be effective for monitoring many elements of forest condition and change, the platform has well-documented limitations in measuring forest structure, the vertical distribution of the canopy. This is important because structure determines several key ecosystem functions, including carbon storage, habitat suitability, and timber volume. Canopy structure is directly measured by LiDAR, and it should be possible to train Landsat structure models at a highly local scale with the dense, global sample of full-waveform LiDAR observations collected by NASA's Global Ecosystem Dynamics Investigation (GEDI). Local models are expected to perform better because (a) such models may take advantage of localized correlations between structure and canopy surface reflectance, and (b) to the extent that models revert to the mean of the calibration data due to a lack of discrimination, local models will revert to a more representative mean. We tested Landsat-based relative height predictions using a new GEDI asset on Google Earth Engine, described here. Mean prediction error declined by 23%, and important prediction biases at the extremes of the range of canopy height dropped as model calibration became more local, minimizing the forest structure signal saturation commonly associated with Landsat and other passive optical sensors. Our results suggest that Landsat-based maps of structural variables such as height and biomass may substantially benefit from the kind of local calibration that GEDI's dense sample of LiDAR data supports. Full article
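The local-calibration idea (train a separate model per site using only the nearest GEDI shots) can be sketched with a plain least-squares regressor standing in for whatever model is actually used. The helper below and its k = 100 default mirror the "100 neighbors" setup described in the Figure 3 caption, but the names and the linear form are illustrative assumptions, not the paper's method:

```python
import numpy as np

def local_height_prediction(target_xy, target_features,
                            shot_xy, shot_features, shot_rh98, k=100):
    """Predict rh98 at one location with a locally calibrated linear model.

    Only the k GEDI shots nearest to `target_xy` are used for fitting,
    so where the spectral features carry little signal the model reverts
    to a local, more representative mean.
    """
    d2 = np.sum((shot_xy - np.asarray(target_xy)) ** 2, axis=1)
    nearest = np.argsort(d2)[:k]                    # k nearest calibration shots
    X = np.column_stack([np.ones(nearest.size), shot_features[nearest]])
    coef, *_ = np.linalg.lstsq(X, shot_rh98[nearest], rcond=None)
    x = np.concatenate(([1.0], np.atleast_1d(target_features)))
    return float(x @ coef)
```

Shrinking the neighbor search radius corresponds to moving from the continental-scale panel (c) toward the 3 km panel (a) in Figure 3.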
(This article belongs to the Special Issue Lidar Remote Sensing of Forest Structure, Biomass and Dynamics)
Show Figures

Graphical abstract

Full article ">Figure 1
<p>Individual raster boundaries making up the GEDI GEE image collection. The area of each raster is displayed with the number of high-quality shots (according to the <span class="html-italic">quality</span> variable) that were publicly available at the time of this publication.</p>
Full article ">Figure 2
<p>Location of the 7615 focal sites (in green) used for the cross-scale height model comparison.</p>
Full article ">Figure 3
<p>Predicted vs observed GEDI rh98 values for 7615 focal sites, with site-specific models trained for each plot using calibration from 100 neighbors within: (<b>a</b>) a 3 km radius; (<b>b</b>) a 30 km radius; and (<b>c</b>) the same continent.</p>
Full article ">Figure 4
<p>Predicted vs. observed GEDI rh98 values (in meters) for models created at three scales across the five plant functional types mapped by [<a href="#B28-remotesensing-12-02840" class="html-bibr">28</a>]: Deciduous broadleaf trees (DBT; 16% of the sample); Deciduous needleleaf trees (DNT; 1%); Evergreen Broadleaf Trees (EBT; 50%); Evergreen Needleleaf Trees (ENT; 16%); and Grass/Shrubs (GS; 17%). The RMSE of models at each scale are given in white.</p>
Full article ">
29 pages, 24498 KiB  
Article
Multi-Hazard and Spatial Transferability of a CNN for Automated Building Damage Assessment
by Tinka Valentijn, Jacopo Margutti, Marc van den Homberg and Jorma Laaksonen
Remote Sens. 2020, 12(17), 2839; https://doi.org/10.3390/rs12172839 - 1 Sep 2020
Cited by 48 | Viewed by 6852
Abstract
Automated classification of building damage in remote sensing images enables the rapid and spatially extensive assessment of the impact of natural hazards, thus speeding up emergency response efforts. Convolutional neural networks (CNNs) can reach good performance on such a task in experimental settings. How CNNs perform when applied under operational emergency conditions, with unseen data and time constraints, is not well studied. This study focuses on the applicability of a CNN-based model in such scenarios. We performed experiments on 13 disasters that differ in natural hazard type, geographical location, and image parameters. The types of natural hazards were hurricanes, tornadoes, floods, tsunamis, and volcanic eruptions, which struck across North America, Central America, and Asia. We used 175,289 buildings from the xBD dataset, which contains human-annotated multiclass damage labels on high-resolution satellite imagery with red, green, and blue (RGB) bands. First, our experiments showed that performance in terms of area under the curve does not correlate with the type of natural hazard, the geographical region, or satellite parameters such as the off-nadir angle. Second, while performance differed considerably between disasters, our model still reached a high level of performance without using any labeled data of the test disaster during training. This provides the first evidence that such a model can be effectively applied under operational conditions, where labeled damage data for the disaster cannot be made available in time and model (re-)training is therefore not an option. Full article
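One common way to feed a damage-classification CNN of the two-input kind shown in the Figure 2 caption is to co-register the before and after patches and stack them channel-wise into a single six-band tensor. A small preprocessing sketch under that assumption; the patch size, scaling, and zero-padding choices are ours, not details taken from the paper:

```python
import numpy as np

def stack_before_after(before_rgb, after_rgb, size=64):
    """Stack co-registered before/after RGB patches into a 6-channel input.

    Each patch is scaled to [0, 1], then cropped or zero-padded to a
    fixed (size, size) footprint so batches can be formed.
    """
    def prepare(img):
        img = np.asarray(img, dtype=np.float32) / 255.0
        out = np.zeros((size, size, 3), dtype=np.float32)
        h = min(img.shape[0], size)
        w = min(img.shape[1], size)
        out[:h, :w] = img[:h, :w, :3]
        return out

    return np.concatenate([prepare(before_rgb), prepare(after_rgb)], axis=-1)
```

Channel stacking is one of two standard designs; the alternative, suggested by Figure 2, is a twin-branch network whose per-image features are fused before the N-way softmax over damage classes.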
(This article belongs to the Special Issue Remote Sensing for Environment and Disaster)
Show Figures

Graphical abstract

Figure 1: Workflow of DNAs during emergency response operations and the possible role of an automated building damage classification model. Black: items/data; blue: processes; purple: events. Full lines: necessary steps; dashed lines: optional steps.
Figure 2: Architecture of the model used. The numbers in the squares indicate the input and output size of each block, where N equals the number of damage classes.
Figure 3: Examples of challenges in the data. (a) Cloud cover; (b) misaligned building polygons; (c) a building with two different damage labels; (d) difference in illumination between the before (left) and after (right) images.
Figure 4: Before and after imagery and damage labels for part of the areas impacted by the Joplin tornado (top row) and the Nepal flooding (bottom row). The colors of the damage labels in panels (c) and (f) indicate the degree of damage: green = no damage, yellow = minor damage, orange = major damage, red = destroyed.
Figure 5: Examples of individual buildings after the disaster, impacted by the Joplin tornado (a–d) and the Nepal flooding (e–h). The subcaptions indicate the damage class.
Figure 6: Confusion matrices of the model tested on the Nepal flooding (left) and the Joplin tornado (right). Both models were trained on 80% of the data and tested on 10%.
Figure 7: Four examples of buildings in the test set of the Nepal flooding that were misclassified by the model. For each building, the before image is shown on the left and the after image on the right. (a) Label: major damage; prediction: no damage. (b) Label: major damage; prediction: no damage. (c) Label: no damage; prediction: destroyed. (d) Label: no damage; prediction: major damage.
Figure 8: Four examples of buildings in the test set of the Joplin tornado that were misclassified by the model. For each building, the before image is shown on the left and the after image on the right. (a) Label: minor damage; prediction: no damage. (b) Label: major damage; prediction: no damage. (c) Label: no damage; prediction: destroyed. (d) Label: no damage; prediction: destroyed.
Figure 9: Scatter plot of the percentage of data points belonging to a class versus the recall of that class for each of the 13 tested disasters. The blue line shows the best polynomial fit and the blue area the 95% confidence interval.
Figure 10: Distribution plot of the building footprint for buildings up to 700 m², i.e., 95% of the buildings. The lines correspond to the distributions over correctly and incorrectly classified samples.
Figure 11: Scatter plots of the value of the parameter versus the AUC. Each dot corresponds to one disaster.
Figure 12: Scatter plots of the value of the parameter versus the AUC. Each dot corresponds to one pre-disaster/post-disaster satellite image pair. In the left column, the x-axis represents the sum of the parameter over the pre and post images; in the right column, it represents the absolute difference in the parameter value between the pre and post images.
Figure 13: Confusion matrix of the model trained on the four wind disasters and tested on the Joplin tornado.
Figure 14: Two buildings before and after the Joplin tornado, and the predictions made by the model trained on the Joplin tornado and the model trained on the four wind disasters.
Figure 15: The true positive (TP), true negative (TN), false positive (FP), and false negative (FN) predictions on 2395 out of 12,165 buildings of the Joplin tornado by the model trained on a mixture of four disasters with wind damage.
Figure 16: Confusion matrix of the model trained on the Midwest flooding and tested on 10% of the Nepal flooding.
Figure 17: Two examples of post-disaster buildings that were misclassified by the model trained on the Midwest flooding and tested on the Nepal flooding.
Figure 18: The predictions, true negatives (TN), and false negatives (FN) on 9 out of the 29,808 buildings of the Nepal flooding by the model trained on the Midwest flooding.
28 pages, 7520 KiB  
Article
Detecting Classic Maya Settlements with Lidar-Derived Relief Visualizations
by Amy E. Thompson
Remote Sens. 2020, 12(17), 2838; https://doi.org/10.3390/rs12172838 - 1 Sep 2020
Cited by 32 | Viewed by 5160
Abstract
In the past decade, Light Detection and Ranging (lidar) has fundamentally changed our ability to remotely detect archaeological features and has deepened our understanding of past human-environment interactions, settlement systems, agricultural practices, and monumental constructions. Across archaeological contexts, lidar relief visualization techniques test how local environments impact archaeological prospection. This study used a 132 km² lidar dataset to assess three relief visualization techniques—sky-view factor (SVF), topographic position index (TPI), and simple local relief model (SLRM)—and object-based image analysis (OBIA) on a slope model for the non-automated visual detection of small hinterland Classic period (250–800 CE) Maya settlements near the polities of Uxbenká and Ix Kuku’il in Southern Belize. Pedestrian survey in the study area identified 315 plazuelas across a 35 km² area; the remaining 90 km² of the lidar dataset have yet to be surveyed. The previously surveyed plazuelas were compared to the plazuelas visually identified on the TPI and SLRM. In total, an additional 563 possible new plazuelas were visually identified across the lidar dataset using the TPI and SLRM. Larger plazuelas, and especially plazuelas located in disturbed environments, are more likely to be detected in a visual assessment of the TPI and SLRM. These findings emphasize the extent and density of Classic Maya settlements and highlight the continued need for pedestrian survey to ground-truth remotely identified archaeological features, as well as the impact of modern anthropogenic activity on archaeological prospection. Remote sensing and lidar have deepened our understanding of past settlement systems and low-density urbanism, processes that we experience today as residents of modern cities. Full article
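The two visualizations that performed best here have simple definitions: the TPI is a cell's elevation minus the mean elevation of a surrounding neighborhood, while an SLRM subtracts a low-pass-filtered trend surface from the DEM. A minimal sketch, assuming a NumPy elevation array and illustrative kernel sizes (the study's actual parameters, e.g. the topography tool heights and widths, differ):

```python
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def tpi(dem, size=10):
    """Topographic position index: cell elevation minus the mean of a size x size neighborhood."""
    return dem - uniform_filter(dem, size=size, mode="nearest")

def slrm(dem, sigma=15):
    """Simple local relief model: DEM minus a Gaussian-smoothed trend surface."""
    return dem - gaussian_filter(dem, sigma=sigma, mode="nearest")
```

Positive TPI/SLRM values highlight raised features such as flattened plazuela platforms and structures; negative values mark depressions, which is why both renderings outline plazuela perimeters so clearly.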
Show Figures

Graphical abstract
Figure 1: Map of the Classic Maya polities (black triangles) of the Southern Belize region mentioned in the text (main) and location within the Maya region (inset). The 132 km² lidar area (darkened) and pedestrian survey area (red outline) are highlighted, along with the locations of Ix Kuku’il and Uxbenká. (Base map images are the intellectual property of Esri and are used herein under license. Copyright 2014 Esri and its licensors. All rights reserved.)
Figure 2: Settlement map of all documented Classic Maya structures at Uxbenká and Ix Kuku’il. The area of pedestrian survey is highlighted in red. (Note: the Ix Kuku’il settlement system extends north beyond the lidar acquisition area; these plazuelas were not included in the comparative analysis.)
Figure 3: Flowchart of methods (blue boxes), inputs (green boxes), and outputs (purple boxes) used in the analysis of relief visualization techniques (yellow boxes) for archaeological prospection.
Figure 4: Relief visualization techniques used in this study and the ability to visually detect archaeological features. This newly identified plazuela is larger than most in the lidar dataset but nicely reflects the variations in visualizations for archaeological prospection (see Figure 5). (a) Hillshade: modified (flattened) hilltops are difficult to detect, but some structures are visible. (b) Slope: the flattened hilltop is visible, as are some of the structures. (c) Sky-view factor (SVF): flattened plazuelas and structures are difficult to detect, although potential looting activity stands out as dark spots on the image. (d) Object-based image analysis (OBIA): flattened plazuelas are visually detectable, but structures are difficult to detect. (e) Topographic position index (TPI): flattened plazuelas are visually detectable by the red circular/rectangular perimeter; structures are also visible. (f) Simple local relief model (SLRM): plazuelas are easy to detect by the lighter plaza surrounded by a darker red perimeter suggestive of a flattened hilltop; structures and possible looting activity are also easily visible. Plazuelas are most easily visually detectable on the SLRM and TPI images, compared to the OBIA and SVF images.
Figure 5: TPI results of a newly identified plazuela on (a) the bare earth model (digital terrain model, DTM) hillshade, (b) the DSM with annual milpas emphasized, and (c–f) TPI results. Input parameters for TPI: TPI raster calculator method, using (c) 10x10 cells; (d) topography tool (TT) height of 2 and width of 10; (e) TT height of 10 and width of 10; and (f) TT height of 33 and width of 33.
Figure 6: The location of plazuelas identified with both SLRM and TPI (green dots), only SLRM (blue dots), and only TPI (pink dots). The background is the TPI raster.
Figure 7: The location of all the plazuelas identified with SLRM and TPI (green dots), compared to plazuelas documented during pedestrian survey (blue dots) and hilltops with no archaeological features documented during pedestrian survey (red dots). Pedestrian survey boundaries are indicated by the gray outline. The background is the TPI raster.
Figure 8: A comparison of (a) large, (b) medium, and (c) medium and small plazuelas. Larger plazuelas and the structures on them (indicated on the maps) tend to be more visible than smaller plazuelas and associated structures. However, these relationships are not always consistent, as vegetation cover and structure size also impact the ability to detect archaeological features. Plazuela outlines are in gray on the SLRM raster background. (d–f) Point-cloud profiles of images (a–c) showing all (point) returns (upper profiles; filter: classification; gray and green = vegetation; orange = ground return) and ground returns (lower profiles; filter: elevation). Elevations are marked on the left of the all-returns point cloud profiles, indicating the lowest elevation (bottom), highest ground elevation (middle), and highest vegetation elevation (upper).
Figure 9: An example of the impact of vegetation on the relief visualization of small plazuelas from Uxbenká Settlement Group 83. (a) The location of structures and the location and size of Plazuelas A and B on a lidar hillshade DTM, in comparison to (b) 2011 milpas and forest areas on the lidar DSM. (c) A 2010 milpa is visible on the GeoEye-1 false color infrared (FCIR) satellite image acquired in 2010. The plazuela in the 2011 milpa is easier to identify on (d) OBIA, (e) TPI, and (f) SLRM, compared to the plazuela within the forested area. Point cloud profiles with (g) all returns (filter: classification; gray = unclassified (vegetation); orange = ground) and (h) ground returns (filter: elevation) show that areas with forest regrowth often have fewer ground return points.
Figure 10: The location of milpas from (a) 2008, (b) 2009, (c) 2010, and (d) 2011 on a GeoEye-1 FCIR satellite image. Surveyed plazuelas within the annual milpa (large dark gray dots) and SLRM- and TPI-identified plazuelas (medium light gray dots) are indicated on the maps. More plazuelas are visible in 2011 milpas than in milpas from previous years. The locations of surveyed (small white dots) and SLRM- and TPI-identified (small black dots) plazuelas are also noted.
Figure 11: The visual effect of anthropogenic landscape modifications on the (a) DSM, (d) DTM, (e) TPI, and (f) SLRM, which should be considered during the remote detection of archaeological features. The GeoEye-1 satellite image in (b) color and (c) FCIR shows the location of structures, roads and tracks, and modified landscapes.
Figure 12: Vegetation type and height impact archaeological prospection. (a–c,f) Shifting agriculture and (a–c,e) tree orchards have differing effects on the (d) lidar point cloud, creating a mosaic landscape.
Figure 13: (a) Vegetation types, 2011 milpas, orchards, and areas of forest regrowth are visible on the lidar DSM and impact the lidar point cloud. (b) Ground points are randomly dispersed in the forest regrowth area and show the outline of orchard trees, resulting in large patches with few points, especially when compared to (c) the total point cloud coverage, creating challenges for the archaeological prospection of small features. The ground points of the 2011 milpa are higher than the surrounding landscape, thus facilitating archaeological prospection.
20 pages, 6208 KiB  
Article
Arctic Sea Level Budget Assessment during the GRACE/Argo Time Period
by Roshin P. Raj, Ole B. Andersen, Johnny A. Johannessen, Benjamin D. Gutknecht, Sourav Chatterjee, Stine K. Rose, Antonio Bonaduce, Martin Horwath, Heidi Ranndal, Kristin Richter, Hindumathi Palanisamy, Carsten A. Ludwigsen, Laurent Bertino, J. Even Ø. Nilsen, Per Knudsen, Anna Hogg, Anny Cazenave and Jérôme Benveniste
Remote Sens. 2020, 12(17), 2837; https://doi.org/10.3390/rs12172837 - 1 Sep 2020
Cited by 16 | Viewed by 5912
Abstract
Sea level change is an important indicator of climate change. Our study focuses on the sea level budget assessment of the Arctic Ocean using: (1) newly reprocessed satellite altimeter data with major changes in the processing techniques; (2) ocean mass change data derived from GRACE satellite gravimetry; and (3) steric height estimated from gridded hydrographic data for the GRACE/Argo time period (2003–2016). The Beaufort Gyre (BG) and the Nordic Seas (NS) regions exhibit the largest positive sea level trends during the study period. Halosteric sea level change is found to dominate the area-averaged sea level trend of the BG, while the trend in the NS is influenced by both halosteric and ocean mass change effects. Temporal variability of sea level in these two regions reveals a significant shift in the trend pattern centered around 2009–2011. Analysis suggests that this shift can be explained by a change in large-scale atmospheric circulation patterns over the Arctic. The sea level budget assessment of the Arctic found a residual trend of more than 1.0 mm/yr. This non-closure of the sea level budget is attributed to the limitations of the three above-mentioned datasets in the Arctic region. Full article
(This article belongs to the Collection Feature Papers for Section Environmental Remote Sensing)
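The budget assessment amounts to comparing linear trends: the residual is the observed sea level trend minus the trend of the summed ocean mass and steric contributions. A minimal sketch with synthetic monthly series (units and variable names are illustrative, not the study's data):

```python
import numpy as np

def trend(t_years, series_mm):
    """Least-squares linear trend of a time series, in mm/yr."""
    return float(np.polyfit(t_years, series_mm, 1)[0])

def residual_trend(t_years, sla, mass, steric):
    """Non-closure of the budget: observed SLA trend minus the (mass + steric) trend."""
    return trend(t_years, sla) - trend(t_years, np.asarray(mass) + np.asarray(steric))
```

Applied to the altimetry, GRACE, and EN4-derived series, a residual trend well above zero (here, more than 1.0 mm/yr) signals that at least one of the three datasets is biased in the Arctic.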
Show Figures
Figure 1: (a) Sea level anomaly (SLA) trend (mm/yr) for the time period 2003–2016, with the marked locations of the Beaufort Gyre (BG), the Nordic Seas (NS), the Barents Sea (BS), and the Russian shelf region (RS). (b,c) SLA trends (mm/yr) of the 2.5% and 97.5% percentiles for the same time period.
Figure 2: Area-averaged sea level time series for the Nordic Seas (NS; red) and the Beaufort Gyre (BG; blue). Locations are shown in Figure 1a. The dashed vertical line indicates the separation of the time periods T1 and T2.
Figure 3: Sea level pressure (SLP) anomaly averaged over (a) 2003–2009 (Arctic dipole pattern) and (b) 2010–2016 (Arctic Oscillation pattern).
Figure 4: SLA trend (mm/yr) for the time periods (a) 2003–2009 and (b) 2010–2016.
Figure 5: GSFC ocean mass trend during (a) 2003–2009 and (b) 2010–2016. Mean ITSG ocean mass trend during (c) 2003–2009 and (d) 2010–2016.
Figure 6: Area-averaged ocean mass change from GSFC mascons in the Nordic Seas (red) and the Beaufort Gyre (blue). Locations are shown in Figure 1a. The dashed vertical line indicates the separation of the time periods T1 and T2.
Figure 7: Trends in steric sea level height (a,b) and in its thermosteric (c,d) and halosteric (e,f) components during 2003–2009 (top panels) and 2010–2016 (bottom panels).
Figure 8: Area-averaged monthly anomalies of the steric sea level (red) and of its thermosteric (blue) and halosteric (green) components over (a) the Nordic Seas and (b) the Beaufort Gyre. Locations are shown in Figure 1a. The dashed vertical line indicates the separation of the time periods T1 and T2.
Figure 9: Arctic sea level budget: area-averaged monthly sea level (red), the sum of the GSFC ocean mass change and the EN4 steric height estimates (SHOM, blue), and the residual (green) for the entire Arctic during 2003–2016. The dashed vertical line indicates the separation of the time periods T1 and T2.
Figure 10: Normalized 12-month running means of the Arctic Oscillation (AO) index (black), the North Atlantic Oscillation (NAO) index (blue), and the Beaufort Gyre SLP (inverted, red). The dashed vertical line indicates the separation of the time periods T1 and T2.
21 pages, 5052 KiB  
Article
Estimating Rural Electric Power Consumption Using NPP-VIIRS Night-Time Light, Toponym and POI Data in Ethnic Minority Areas of China
by Fei Zhao, Jieyu Ding, Sujin Zhang, Guize Luan, Lu Song, Zhiyan Peng, Qingyun Du and Zhiqiang Xie
Remote Sens. 2020, 12(17), 2836; https://doi.org/10.3390/rs12172836 - 1 Sep 2020
Cited by 20 | Viewed by 4763
Abstract
Because the estimation of electric power consumption (EPC) from night-time light (NTL) data has mostly been confined to large areas, a method for estimating EPC in rural areas is proposed. Rural electric power consumption (REPC) is a key indicator of national socio-economic development. Despite an improved quality of life in rural areas, there is still a large gap in electricity consumption between rural and urban residents in China. The experiment takes REPC as the research target, selects the Dehong (DH) Dai Jingpo Autonomous Prefecture of Yunnan Province as an example, and uses NTL data from the Visible Infrared Imaging Radiometer Suite (VIIRS) Day–Night Band (DNB) carried by the Suomi National Polar-orbiting Partnership (NPP) satellite from 2012 to 2017, together with toponym and points-of-interest (POI) data, as the main data sources. By performing kernel density estimation to extract urban center and rural area boundaries in the prefecture, and combining county-level boundary data and electric power data, a linear regression model relating total rural NTL intensity to REPC is estimated. Finally, according to the model, the EPC in ethnic minority rural areas is estimated at a 1-km spatial resolution. The results show that the NPP-REPC model can simulate REPC within a small average error (17.8%). Additionally, there are distinct spatial differences in REPC across ethnic minority areas. Full article
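The calibration step described above is an ordinary least-squares fit of county-level EPC against summed NTL intensity, with accuracy reported as an average relative error. A sketch with made-up county totals (the fitted coefficients and the 17.8% figure in the paper come from real data, not from this illustration):

```python
import numpy as np

def fit_ntl_model(total_ntl, epc):
    """Fit EPC = a * NTL + b by least squares over calibration counties."""
    a, b = np.polyfit(total_ntl, epc, 1)
    return float(a), float(b)

def mean_relative_error(predicted, observed):
    """Average |prediction error| / observation, the accuracy measure used for NPP-REPC."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return float(np.mean(np.abs(predicted - observed) / observed))

# Hypothetical county totals: summed rural NTL radiance vs. reported REPC.
ntl = np.array([120.0, 340.0, 560.0, 810.0, 1020.0])
epc = np.array([260.0, 700.0, 1140.0, 1640.0, 2060.0])
a, b = fit_ntl_model(ntl, epc)
pred = a * ntl + b
```

Once `a` and `b` are fixed, the same linear relation can be applied per 1-km NTL pixel to spatialize REPC across the prefecture.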
Show Figures

Graphical abstract
Figure 1: Administrative divisions and geographical position of the Dehong (DH) Prefecture.
Figure 2: Flowchart of the methodology.
Figure 3: The night-time stable light (NSL) data of the National Polar-orbiting Partnership’s Visible Infrared Imaging Radiometer Suite (NPP-VIIRS) in 2017.
Figure 4: Results of kernel density estimation (KDE) at different bandwidths: (a) 300 m; (b) 500 m; (c) 800 m; (d) 1000 m; (e) 1500 m; (f) 3000 m.
Figure 5: Division results of the 2017 NPP & PT composite index.
Figure 6: Results of the linear regression model.
Figure 7: Spatialization results of electric power consumption (EPC) in the DH Prefecture in 2017.
Figure 8: EPC comparison results for the DH Prefecture in 2017.
Figure 9: Rural electric power consumption (REPC) results for ethnic minorities.
Figure 10: REPC belt results.