Remote Sens., Volume 10, Issue 5 (May 2018) – 148 articles

Cover Story: The Advanced Himawari Imager (AHI), on board the Japanese Himawari-8 geostationary weather satellite, provides an opportunity to observe diurnal changes in aerosol properties over the Asia-Pacific region. Using hourly AHI observations, a new dark target algorithm was proposed to retrieve the aerosol optical depth (AOD) at 1 km and 5 km resolutions over mainland China. Comparison and validation show that the AHI AOD data agree well with ground-based and satellite AOD measurements (AERONET AOD, MODIS DT AOD, and MODIS DB AOD), albeit with a slight overestimation. Overall, the preliminary AHI AOD data reflect well the spatial and temporal variation of aerosol properties in China.
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • Papers are published in both HTML and PDF forms, and PDF is the official format. To view a paper in PDF, click its "PDF Full-text" link and open the file with the free Adobe Reader.
13 pages, 23574 KiB  
Technical Note
Annual New Production of Phytoplankton Estimated from MODIS-Derived Nitrate Concentration in the East/Japan Sea
by Huitae Joo, Dabin Lee, Seung Hyun Son and Sang Heon Lee
Remote Sens. 2018, 10(5), 806; https://doi.org/10.3390/rs10050806 - 22 May 2018
Cited by 6 | Viewed by 5344
Abstract
Our main objective in this study was to determine the inter-annual variation of the annual new production in the East/Japan Sea (EJS), estimated from MODIS-Aqua satellite-derived sea surface nitrate (SSN). The new production was extracted for the northern (>40° N) and southern (<40° N) parts of the EJS, separated by the Subpolar Front (SPF). Based on the SSN concentrations derived from satellite data, we found that the annual new production (mean ± S.D. = 85.6 ± 10.1 g C m−2 year−1) in the northern part of the EJS was significantly higher (t-test, p < 0.01) than that of the southern part (mean ± S.D. = 65.6 ± 3.9 g C m−2 year−1). Given the relationships between new production and sea surface temperature (SST) found in this study, new production could be more susceptible to continued SST warming in the northern part of the EJS than in the southern part. Since the new production estimated in this study is based only on the nitrate input into the euphotic zone during the winter, additional nitrate sources (e.g., the upward nitrate flux through the MLD and atmospheric deposition) should be considered when estimating the annual new production.
(This article belongs to the Special Issue Remote Sensing of Ocean Colour)
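The conversion at the heart of this estimate, from winter surface nitrate to an annual carbon-based new production via a C/N ratio, can be sketched as follows. This is a minimal illustration with placeholder coefficients: the linear SSN-SST relation and the mixed-layer depth are hypothetical stand-ins, not the authors' fitted values, and the C/N ratio is the Redfield value of 6.625.

```python
import numpy as np

# Hypothetical SSN-SST regression (placeholder coefficients, not the paper's fit):
# SSN (mmol N m-3) estimated from sea surface temperature (deg C).
def estimate_ssn(sst):
    return np.clip(14.0 - 0.9 * sst, 0.0, None)

# Convert a winter mixed-layer nitrate inventory to annual new production,
# assuming the winter nitrate in the euphotic zone is consumed during the year.
def annual_new_production(sst_winter, mld_m=100.0, cn_ratio=6.625):
    ssn = estimate_ssn(sst_winter)                # mmol N m-3
    nitrate_inventory = ssn * mld_m               # mmol N m-2
    carbon = nitrate_inventory * cn_ratio * 12.0  # mg C m-2 year-1 (12 mg C per mmol C)
    return carbon / 1000.0                        # g C m-2 year-1

print(annual_new_production(np.array([2.0, 8.0, 14.0])))
```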
Show Figures

Figure 1: Field measurement stations in the East/Japan Sea (EJS). Squared dots indicate the NIFS (National Institute of Fisheries Sciences, Korea) stations. Cross marks are the JMA (Japan Meteorological Agency, Japan) stations. Dots are the MOF (Ministry of Oceans and Fisheries, Korea) stations. Red dots are the locations where the C/N ratio was measured previously.
Figure 2: Relationships between Sea Surface Nitrate (SSN) and Sea Surface Temperature (SST) in the northern (a) and southern (b) East/Japan Sea.
Figure 3: Correlations between field-measured Sea Surface Nitrate (SSN) and estimated SSN in the northern (a) and southern (b) East/Japan Sea. Dashed lines represent 1:1 lines. Red lines represent the 95% prediction bounds. Blue lines show the regression curves between in situ SSN and SSN estimated with the algorithm developed by Goes et al. (2000).
Figure 4: Climatological monthly images of Sea Surface Nitrate (SSN) in the East/Japan Sea (EJS).
Figure 5: Time series of monthly Sea Surface Nitrate (SSN) values for the northern and southern East/Japan Sea (EJS) from July 2002 to December 2015.
Figure 6: Long-term pattern of the annual Sea Surface Nitrate (SSN) in the East/Japan Sea (EJS) from 2003 to 2015.
Figure 7: Long-term pattern of the annual new production in the East/Japan Sea (EJS) from 2003 to 2015.
Figure 8: Anomalies of the annual new production in the northern and southern East/Japan Sea (EJS) during the study period. Blue bars represent the northern EJS and red bars the southern EJS.
Figure 9: Relationships between the annual new production and Sea Surface Temperature (SST) in March in the northern (a) and southern (b) East/Japan Sea (EJS).
18 pages, 5928 KiB  
Article
Estimation of Vegetable Crop Parameter by Multi-temporal UAV-Borne Images
by Thomas Moeckel, Supriya Dayananda, Rama Rao Nidamanuri, Sunil Nautiyal, Nagaraju Hanumaiah, Andreas Buerkert and Michael Wachendorf
Remote Sens. 2018, 10(5), 805; https://doi.org/10.3390/rs10050805 - 22 May 2018
Cited by 63 | Viewed by 8942
Abstract
3D point cloud analysis of imagery collected by unmanned aerial vehicles (UAVs) has been shown to be a valuable tool for estimating crop phenotypic traits, such as plant height, in several species. Spatial information about these phenotypic traits can be used to derive information about other important crop characteristics, like fresh biomass yield, which cannot be derived directly from the point clouds. Previous approaches have often considered only single-date measurements, using a single point-cloud-derived metric for the respective trait. Furthermore, most studies focused on plant species with a homogeneous canopy surface. The aim of this study was to assess the applicability of UAV imagery for capturing crop height information of three vegetable crops (eggplant, tomato, and cabbage) with a complex canopy surface during a complete crop growth cycle in order to infer biomass. Additionally, the effect of crop development stage on the relationship between estimated and field-measured crop height was examined. Our study was conducted in an experimental layout at the University of Agricultural Sciences in Bengaluru, India. For all crops, crop height and biomass were measured at five dates during one crop growth cycle between February and May 2017 (average crop height was 42.5, 35.5, and 16.0 cm for eggplant, tomato, and cabbage, respectively). Using a structure-from-motion approach, a 3D point cloud was created for each crop and sampling date. In total, 14 crop height metrics were extracted from the point clouds. Machine learning methods were used to create prediction models for vegetable crop height. The study demonstrates that monitoring crop height with a UAV during an entire growing period results in detailed and precise estimates of crop height and biomass for all three crops (R2 ranging from 0.87 to 0.97, bias ranging from −0.66 to 0.45 cm). The effect of crop development stage on the predicted crop height was found to be substantial (e.g., the median deviation increased from 1% to 20% for eggplant), influencing the strength and consistency of the relationship between point cloud metrics and crop height estimates, and thus should be further investigated. Altogether, the results demonstrate that point clouds generated from UAV-based RGB imagery can be used to effectively measure vegetable crop biomass over larger areas (relative error = 17.6%, 19.7%, and 15.2% for eggplant, tomato, and cabbage, respectively) with an accuracy similar to that of biomass prediction models based on measured crop height (relative error = 21.6%, 18.8%, and 15.2%, respectively).
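A minimal sketch of the modelling step, predicting crop height from point-cloud height metrics with random forest regression, might look like the following. The data here are synthetic; the 14 metrics, the hyperparameters, and the calibration/validation plot split used in the paper are not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-plot point-cloud height metrics (e.g., mean, max,
# percentiles); the paper extracts 14 such metrics from each point cloud.
n_plots, n_metrics = 120, 14
X = rng.uniform(0.0, 0.6, size=(n_plots, n_metrics))              # metres
y = 0.8 * X[:, 0] + 0.2 * X[:, 3] + rng.normal(0, 0.02, n_plots)  # "true" height

train, test = slice(0, 90), slice(90, None)
model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X[train], y[train])

pred = model.predict(X[test])
print("R2:", r2_score(y[test], pred), "bias:", np.mean(pred - y[test]))
```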
Show Figures

Graphical abstract

Figure 1: (A) India's political map showing the location of Bengaluru; (B) the experimental design of the study. S indicates plots in which all destructively measured parameters were sampled (i.e., biomass harvest). H indicates plots in which all non-destructive measurements were conducted (i.e., spectral sampling). S plots were used for model calibration, while H plots were used for model validation.
Figure 2: Process chain of the analysis. The yellow box (left) describes the biomass sampling and modelling, the green box (centre) shows the crop height sampling, and the blue box (right) describes the point cloud processing. S indicates plots in which all destructively measured parameters were sampled (i.e., biomass harvest). H indicates plots in which only non-destructive measurements were conducted (i.e., spectral sampling). S plots were used for model calibration, while H plots were used for model validation. * Green-Red Vegetation Index.
Figure 3: Field-measured average crop height versus predicted average crop height for random forest regression (top) and support vector regression (bottom). From left to right, results are presented for eggplant, tomato, and cabbage.
Figure 4: Relative deviation of the crop height values predicted by random forest regression (top) and support vector regression (bottom) from the measured crop height values for each sampling date (01–05). From left to right, results are presented for eggplant, tomato, and cabbage.
Figure 5: Biomass versus (top) field-measured crop height and (bottom) crop height predicted by random forest regression.
Figure 6: Predicted versus observed biomass values based on (top) field-measured crop height and (bottom) predicted crop height. Symbols indicate the five sampling dates: (square) date 1, (circle) date 2, (triangle) date 3, (plus) date 4, and (cross) date 5. The black lines indicate the 1:1 fit of the values.
Figure A1: Exemplary pictures of the three crops used in the study. Top: point clouds. Bottom: photographs. From left to right: tomato, cabbage, and eggplant. All images and point clouds were collected at the second sampling date.
23 pages, 2350 KiB  
Article
Performance Assessment of the COMET Cloud Fractional Cover Climatology across Meteosat Generations
by Jędrzej S. Bojanowski, Reto Stöckli, Anke Duguay-Tetzlaff, Stephan Finkensieper and Rainer Hollmann
Remote Sens. 2018, 10(5), 804; https://doi.org/10.3390/rs10050804 - 22 May 2018
Cited by 10 | Viewed by 5723
Abstract
The CM SAF Cloud Fractional Cover dataset from Meteosat First and Second Generation (COMET, https://doi.org/10.5676/EUM_SAF_CM/CFC_METEOSAT/V001), covering 1991–2015, has recently been released by the EUMETSAT Satellite Application Facility for Climate Monitoring (CM SAF). COMET is derived from the MVIRI and SEVIRI imagers aboard the geostationary Meteosat satellites and features a Cloud Fractional Cover (CFC) climatology at high temporal (1 h) and spatial (0.05° × 0.05°) resolution. The CM SAF long-term cloud fraction climatology is a unique long-term dataset that resolves the diurnal cycle of cloudiness. The cloud detection algorithm optimally exploits the limited information from only two channels (broadband visible and thermal infrared) acquired by the older geostationary sensors. The underlying algorithm employs a cyclic generation of clear-sky background fields, uses continuous cloud scores, and runs a naïve Bayesian cloud fraction estimation using concurrent information on cloud state and variability. The algorithm depends on well-characterized infrared radiances (IR) and visible reflectances (VIS) from the Meteosat Fundamental Climate Data Record (FCDR) provided by EUMETSAT. The evaluation of both Level-2 (instantaneous) and Level-3 (daily and monthly mean) cloud fractional cover (CFC) was performed using two reference datasets: ground-based cloud observations (SYNOP) and retrievals from an active satellite instrument (CALIPSO/CALIOP). Intercomparisons employed concurrent state-of-the-art satellite-based datasets derived from geostationary and polar-orbiting passive visible and infrared imaging sensors (MODIS, CLARA-A2, CLAAS-2, PATMOS-x, and CC4CL-AVHRR). Averaged over all reference SYNOP sites on the monthly time scale, COMET CFC reveals (for 0–100% CFC) a mean bias of −0.14%, a root mean square error of 7.04%, and a trend in bias of −0.94% per decade. COMET's shortcomings include a larger negative bias during Northern Hemisphere winter, lower precision for high sun zenith angles and high viewing angles, and an inhomogeneity around 1995/1996. Yet, we conclude that COMET CFC corresponds well to the SYNOP measurements and is thus useful for extending century-long ground-based climate observations in both space and time.
(This article belongs to the Special Issue Assessment of Quality and Usability of Climate Data Records)
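The headline evaluation statistics (mean bias error, bias-corrected RMSE, and a per-decade trend in bias) can be computed from collocated satellite/SYNOP pairs along the following lines. This is a sketch on synthetic inputs, using SciPy's Theil-Sen estimator; the paper's exact trend and homogeneity tooling is not reproduced.

```python
import numpy as np
from scipy.stats import theilslopes

rng = np.random.default_rng(1)

# Synthetic monthly collocations: satellite CFC vs SYNOP CFC (0-100%).
months = np.arange(300)                        # 25 years of monthly means
synop = rng.uniform(20, 90, months.size)
sat = synop + rng.normal(-0.14, 7.0, months.size)

bias = sat - synop
mbe = bias.mean()
rmse_bc = np.sqrt(np.mean((bias - mbe) ** 2))  # bias-corrected RMSE

# Theil-Sen slope of the bias series, expressed per decade (120 months).
slope, intercept, lo, hi = theilslopes(bias, months)
print(f"MBE={mbe:.2f}%  bc-RMSE={rmse_bc:.2f}%  trend={slope * 120:.2f}%/decade")
```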
Show Figures

Graphical abstract

Figure 1: Overview of the Meteosat record used as input for the generation of COMET.
Figure 2: Overview of the Advanced Very High Resolution Radiometer (AVHRR) measurements used to calculate the PATMOS-x CFC monthly means.
Figure 3: Performance statistics of the instantaneous COMET CFC during 1991–2015 as compared to synoptic observations at 237 sites: mean bias error (left), bias-corrected root mean square error (right).
Figure 4: Performance statistics of the Level-2 COMET CFC during 1991–2015 as compared to synoptic observations at 237 sites, in relation to sun zenith angle (a,c) and satellite view zenith angle (b,d). Each box in the box-and-whisker plots (a,c) is generated from all Level-2 COMET-SYNOP collocations (given by N) for the Satellite Zenith Angle (SZA) classes given by the x-axis. Boxes denote the 1st and 3rd quartiles (with a thick horizontal median line). Whiskers indicate the largest and lowest values within 1.5 times the interquartile range, while circles represent values beyond this.
Figure 5: Performance statistics of the COMET daily (a) and monthly (b) CFC means during 1991–2015 as compared to synoptic observations at 237 sites. Shaded areas reveal the accuracy requirements.
Figure 6: Time series of the mean bias error (a) and bias-corrected root mean square error (b) of COMET as compared to synoptic observations at 237 sites in 1991–2015. Thick lines with dots represent annual means. The black dashed line represents a Theil–Sen linear trend provided with its Mann–Kendall statistical significance. Shaded areas reveal the accuracy requirements. A yellow solid line shows the T(k) statistic from the Standard Normal Homogeneity Test.
Figure 7: Comparison of COMET CFC monthly means (0–100%) for 2005 derived from Meteosat First Generation (MFG) and Meteosat Second Generation (MSG). Upper panels present COMET MFG (a) and COMET MSG (b) annual means. Lower panels show the difference between COMET MFG and COMET MSG for January (c) and June (d).
Figure 8: Five-day moving average of the COMET and CALIPSO cloud masks (yielding a cloud fraction, top) and the difference between both (bottom).
Figure 9: COMET cloud scores as a function of the COT threshold used to discriminate clear and cloudy CALIOP observations. KSS denotes the Hanssen–Kuiper's Discriminant.
Figure 10: Probability of detection for the COMET cloud mask resolved by cloud type. The cloud type is taken at the CALIOP cloud layer where the top-to-bottom integrated optical thickness exceeds 0.2. Blue bars were computed by interpreting all CALIOP measurements with total column COT > 0 as cloudy. Light-blue bars were derived using COT > 0.2 as the cloud criterion. Grey bars give the number of matchups by cloud type.
Figure 11: Mean annual CFC of Meteosat and reference CALIOP, and skill scores derived for COT > 0 (first row) and COT > 0.2 (second row).
Figure 12: Time series of monthly (a) and yearly aggregated (b) mean bias error of COMET and the intercompared satellite-based CDRs as compared to synoptic observations at 237 sites. Values in brackets in panel (a) indicate the mean MBE for the whole analysed period.
Figure 13: Differences of COMET versus CLAAS-2 (a), CLARA-A2 (b), MODIS (c), CC4CL-AVHRR (d), and PATMOS-x (e) monthly means (in CFC 0–100%) for 2005 at 1° × 1° over the Meteosat disc. Panel (f) presents annual (2005) zonal mean differences in cloud fractional cover between COMET and the intercompared datasets.
17 pages, 1550 KiB  
Article
Correcting Measurement Error in Satellite Aerosol Optical Depth with Machine Learning for Modeling PM2.5 in the Northeastern USA
by Allan C. Just, Margherita M. De Carli, Alexandra Shtein, Michael Dorman, Alexei Lyapustin and Itai Kloog
Remote Sens. 2018, 10(5), 803; https://doi.org/10.3390/rs10050803 - 22 May 2018
Cited by 67 | Viewed by 8586
Abstract
Satellite-derived estimates of aerosol optical depth (AOD) are key predictors in particulate air pollution models. The multi-step retrieval algorithms that estimate AOD also produce quality control variables, but these have not been systematically used to address the measurement error in AOD. We compare three machine-learning methods, namely random forests, gradient boosting, and extreme gradient boosting (XGBoost), to characterize and correct measurement error in the Multi-Angle Implementation of Atmospheric Correction (MAIAC) 1 × 1 km AOD product for the Aqua and Terra satellites across the Northeastern/Mid-Atlantic USA against collocated measures from 79 ground-based AERONET stations over 14 years. Models included 52 quality control, land use, meteorology, and spatially-derived features. Variable importance measures suggest relative azimuth, AOD uncertainty, and the AOD difference in 30–210 km moving windows are among the most important features for predicting measurement error. XGBoost outperformed the other machine-learning approaches, decreasing the root mean squared error in withheld testing data by 43% and 44% for Aqua and Terra, respectively. After correction using XGBoost, the correlation of collocated AOD with daily PM2.5 monitors across the region increased by 10 and 9 percentage points for Aqua and Terra, respectively. We demonstrate how machine learning with quality control and spatial features substantially improves satellite-derived AOD products for air pollution modeling.
(This article belongs to the Special Issue Aerosol Remote Sensing)
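The correction scheme, learning the error AOD − AOT from quality-control and spatial features and then subtracting the predicted error from the satellite AOD, can be sketched as below. The features and data are synthetic stand-ins, and the XGBoost hyperparameters are placeholders rather than the tuned values.

```python
import numpy as np
import xgboost as xgb
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Synthetic collocations: satellite AOD, AERONET AOT, and QC/geometry features.
n = 5000
features = rng.normal(size=(n, 10))   # stand-ins for the 52 QC/land-use features
error = 0.05 * features[:, 0] - 0.03 * features[:, 1] + rng.normal(0, 0.02, n)
aot = rng.gamma(2.0, 0.1, n)          # "true" AERONET AOT
aod = aot + error                     # satellite AOD with measurement error

X_tr, X_te, e_tr, e_te, aod_tr, aod_te, aot_tr, aot_te = train_test_split(
    features, error, aod, aot, test_size=0.25, random_state=0)

model = xgb.XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_tr, e_tr)                 # predict the error, not the AOD itself

aod_corrected = aod_te - model.predict(X_te)
rmse = lambda a, b: float(np.sqrt(np.mean((a - b) ** 2)))
print("RMSE before:", rmse(aod_te, aot_te), "after:", rmse(aod_corrected, aot_te))
```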
Show Figures

Graphical abstract

Figure 1: Study region in the Northeastern and Mid-Atlantic USA with 79 unique AERONET stations, showing the number of years of coverage for use in measurement error modeling.
Figure 2: MAIAC AOD versus AOD-AERONET AOT (aerosol optical thickness) in collocated observations in the Aqua measurement error dataset (n = 8531). Since both AOD and AOT are strictly positive, the empty upper left of the Bland-Altman plot is expected. Marginal histograms show AOD is skewed but AOD-AOT, an estimate of measurement error, is more normally distributed.
Figure 3: Maps of MAIAC AOD for 2008-01-16 (a) before and (b) after correction with our XGBoost measurement error prediction model. The inset shows the density of AOD within this scene.
Figure 4: Variable importance for predicting measurement error by node impurity from XGBoost for the Aqua and Terra datasets, with intervals showing the range of variable importance measures across ten bootstrap-resampling fits of the training dataset.
Figure 5: Partial dependence plot of measurement error as a function of relative azimuth for the Aqua training set (n = 7280) from the GBM approach. The marginal histogram shows the bimodal distribution of relative azimuth for these Aqua retrievals, with larger errors (further from zero) seen for the second mode with angle >120° in backscattering conditions.
13 pages, 16498 KiB  
Article
Combining TerraSAR-X and Landsat Images for Emergency Response in Urban Environments
by Shiran Havivi, Ilan Schvartzman, Shimrit Maman, Stanley R. Rotman and Dan G. Blumberg
Remote Sens. 2018, 10(5), 802; https://doi.org/10.3390/rs10050802 - 21 May 2018
Cited by 13 | Viewed by 5236
Abstract
Rapid damage mapping following a disaster event, especially in an urban environment, is critical to ensure that the emergency response in the affected area is rapid and efficient. This work presents a new method for damage assessment mapping in urban environments. Based on combining SAR and optical data, the method is applicable as support during initial emergency planning and rescue operations. The study focuses on the urban areas affected by the Tohoku earthquake and subsequent tsunami in Japan, which occurred on 11 March 2011. High-resolution TerraSAR-X (TSX) images from before and after the event, and a Landsat 5 image from before the event, were acquired. The affected areas were analyzed with the SAR data using only one interferometric SAR (InSAR) coherence map. To increase the damage mapping accuracy, the normalized difference vegetation index (NDVI) was applied. The generated map, with a grid size of 50 m, provides a quantitative assessment of the nature and distribution of the damage. The damage map shows detailed information about the affected area, with high overall accuracy (89%) and a high Kappa coefficient (82%), and, as expected, it shows total destruction along the coastline compared to the inland region.
(This article belongs to the Special Issue Ten Years of TerraSAR-X—Scientific Results)
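A sketch of the combined classification rule follows: a co-event coherence threshold flags damage, and an NDVI threshold masks vegetation that would otherwise mimic damage through low coherence. The 0.4/0.5 thresholds echo those reported for Figure 5; the input rasters here are toy arrays, and `damage_map` is a hypothetical helper, not the authors' code.

```python
import numpy as np

def damage_map(coherence_co, ndvi, ndvi_thresh=0.4, coh_thresh=0.5):
    """Classify each pixel: 0 = vegetated, 1 = damaged, 2 = undamaged.

    Vegetation is masked last so it overrides the coherence test, because
    vegetated areas also decorrelate between SAR acquisitions and would
    otherwise be flagged as damage.
    """
    classes = np.full(coherence_co.shape, 2, dtype=np.uint8)  # undamaged
    classes[coherence_co < coh_thresh] = 1                    # low coherence -> damaged
    classes[ndvi > ndvi_thresh] = 0                           # vegetation overrides
    return classes

# Toy rasters standing in for the co-event coherence map and pre-event NDVI.
rng = np.random.default_rng(3)
coh = rng.uniform(0, 1, (100, 100))
ndvi = rng.uniform(-0.2, 0.8, (100, 100))
print(np.bincount(damage_map(coh, ndvi).ravel(), minlength=3))
```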
Show Figures

Graphical abstract

Figure 1: Pre-event Landsat 5 image (in visible bands 1—blue, 2—green, and 3—red) of the research area from 24 August 2010. Inset: overview of the area most affected by the Tohoku earthquake and the earthquake's epicenter, located at 38.322°N/142.369°E.
Figure 2: Research outline.
Figure 3: InSAR coherence maps overlying the Landsat 5 image: (a) pre-event coherence map; and (b) co-event coherence map. Light tones (high coherence values) represent no change. Dark tones (low coherence values) represent changes. The red polygon represents low coherence values due to vegetation cover; see text for further explanation.
Figure 4: NDVI map. Light shades represent vegetated areas, observed mostly along the coastline. Dark shades represent urban areas or man-made structures, which mostly appear in the inland areas. The red polygon represents a suspected area that was found to be vegetated.
Figure 5: NDVI and co-event coherence maps generated with thresholds of 0.4 and 0.5, respectively. The combined map displays three classes: vegetated areas (green), damaged areas (dark red), and undamaged areas (white). Insets A, B, C, and D are enlarged in Figure 6.
Figure 6: Insets (A–D) showing selected areas from the entire scene (Figure 5): pre-event optical image; post-event optical image; combined map, in which vegetation is represented in green, undamaged areas in white, and damaged areas in red.
Figure 7: Damage density map per unit area of 50 m. Light colors represent a lower probability of damage, dark colors a higher probability of damage. Green represents vegetation.
Figure 8: Grid size of 100 m vs. 50 m, overlying an optical image.
Figure 9: Damage density maps with grid sizes of 100 m vs. 50 m. Insets (A–D), showing selected areas from the entire scene, display the differences in damage distribution between the two grids.
34 pages, 4255 KiB  
Article
Unsupervised Nonlinear Hyperspectral Unmixing Based on Bilinear Mixture Models via Geometric Projection and Constrained Nonnegative Matrix Factorization
by Bin Yang, Bin Wang and Zongmin Wu
Remote Sens. 2018, 10(5), 801; https://doi.org/10.3390/rs10050801 - 21 May 2018
Cited by 15 | Viewed by 4752
Abstract
Bilinear mixture model-based methods have recently shown promising capability in nonlinear spectral unmixing. However, because they rely on endmembers extracted in advance, their unmixing accuracy decreases, especially when the data are highly mixed. In this paper, a geometric projection strategy is proposed and combined with constrained nonnegative matrix factorization for unsupervised nonlinear spectral unmixing. According to the characteristics of bilinear mixture models, a set of facets is determined, each of which represents the partial nonlinearity neglecting one endmember. Then, pixels' barycentric coordinates with respect to every endmember are calculated in several newly constructed simplices using a distance measure. In this way, pixels can be projected onto their approximate linear mixture components, which greatly reduces the impact of collinearity. Different from relevant nonlinear unmixing methods in the literature, this procedure effectively facilitates a more accurate estimation of endmembers and abundances in constrained nonnegative matrix factorization. The updated endmembers are further used to reconstruct the facets and obtain the pixels' new projections. Finally, endmembers, abundances, and pixels' projections are updated alternately until a satisfactory result is obtained. The superior performance of the proposed algorithm in nonlinear spectral unmixing has been validated through experiments with both synthetic and real hyperspectral data, in which traditional and state-of-the-art algorithms are compared.
(This article belongs to the Section Remote Sensing Image Processing)
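The alternating estimation at the core of the method can be illustrated with a plain abundance sum-to-one constrained NMF applied to (projected) linear mixture components. This is a sketch using the standard data-augmentation trick for the sum-to-one constraint, not the authors' full BCNMF with geometric projection onto the facets.

```python
import numpy as np

def constrained_nmf(X, p, n_iter=200, delta=20.0, eps=1e-9):
    """Sum-to-one constrained NMF: X (bands x pixels) ~ A (bands x p) @ S (p x pixels).

    The abundance sum-to-one constraint is enforced by augmenting X and A
    with a row of delta's, a standard trick in hyperspectral unmixing.
    """
    bands, pixels = X.shape
    rng = np.random.default_rng(0)
    A = rng.uniform(0.1, 1.0, (bands, p))
    S = rng.uniform(0.1, 1.0, (p, pixels))
    for _ in range(n_iter):
        Xa = np.vstack([X, delta * np.ones((1, pixels))])
        Aa = np.vstack([A, delta * np.ones((1, p))])
        S *= (Aa.T @ Xa) / (Aa.T @ Aa @ S + eps)   # multiplicative update for S
        A *= (X @ S.T) / (A @ (S @ S.T) + eps)     # multiplicative update for A
    return A, S

# Toy example: 3 endmembers, 50 bands, 1000 pixels with Dirichlet abundances.
rng = np.random.default_rng(4)
A_true = rng.uniform(0, 1, (50, 3))
S_true = rng.dirichlet(np.ones(3), 1000).T
A, S = constrained_nmf(A_true @ S_true, p=3)
print("abundance sums ~1:", S.sum(axis=0)[:5])
```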
Show Figures

Graphical abstract

Figure 1: Geometric distribution of the data according to each model, described by the three most significant principal components (PCs): (a) linear mixture model (LMM); (b) Fan model (FM); (c) generalized bilinear model (GBM); (d) polynomial post-nonlinear model (PPNM).
Figure 2: An example of a virtual endmember's spectral curve.
Figure 3: Geometric interpretation of Property 1. x is a BMM pixel, and y is x's projection on the line through the two endmembers a1 and a2. If p lies on the line through x and its linear mixture component x^LMM = a1·s1 + a2·s2, then y has the same abundances with respect to a1 and a2 as x [59,60,61,62] (i.e., y = x^LMM). The case can be extended to a high-dimensional space as well.
Figure 4: Illustration of projection in the feature subspace: (a) construction of the nonlinear vertex p; (b) geometric distance measure-based projection using nonlinear planes.
Figure 5: Endmember extraction for highly mixed FM data.
Figure 6: The impact of the accuracy of endmembers on projections: (a) true endmembers; (b) vertex component analysis (VCA) endmembers.
Figure 7: Flowchart of the proposed method.
Figure 8: Diagram of geometric projection onto hyperplanes: (a) LMM; (b) bilinear mixture models (BMMs).
Figure 9: Endmember spectra: (a) Maple_Leaves DW92-1; (b) Olivine GDS70.a Fo89 165 um; (c) Calcite CO2004; (d) Quartz GDS74 Sand Ottawa; (e) Grass_dry.9+.1green AMX32; (f) Muscovite GDS107; (g) Alunite GDS82 Na82; (h) Uralite HS345.3B; (i) Mascagnite GDS65.a.
Figure 10: Convergence analysis of the BMM-based constrained NMF algorithm (BCNMF): (a) MSADs; (b) RMSEs.
Figure 11: The unmixing accuracies of BCNMF as the regularization parameter λ changes: (a) MSADs; (b) RMSEs.
Figure 12: Virtual orchard dataset [34]: (a) high-resolution image (tree, soil, and weed); (b) endmembers' spectral curves.
Figure 13: Region of interest in the Airborne Visible Infrared Imaging Spectrometer (AVIRIS) Moffett image.
Figure 14: Endmembers extracted by BCNMF based on three BMMs: (a) FM; (b) GBM; (c) PPNM.
Figure 15: Abundance maps of the AVIRIS data estimated by BCNMF.
Figure 16: The differences in reconstructed images between the FCLS and the nonlinear spectral unmixing algorithms.
Figure 17: Sub-region of the Hyperspectral Digital Imagery Collection Experiment (HYDICE) data.
Figure 18: Abundance maps of the HYDICE data estimated by BCNMF based on the FM, GBM, and PPNM, respectively, from top row to bottom row. Each column corresponds to the abundances of one of five endmembers: (a) water; (b) roofs; (c) roads; (d) grass; (e) trees.
Figure 19: The differences in the trees' abundance maps between the fully constrained least squares (FCLS) and the nonlinear spectral unmixing algorithms.
23 pages, 41039 KiB  
Article
Hyperspectral and Multispectral Image Fusion via Deep Two-Branches Convolutional Neural Network
by Jingxiang Yang, Yong-Qiang Zhao and Jonathan Cheung-Wai Chan
Remote Sens. 2018, 10(5), 800; https://doi.org/10.3390/rs10050800 - 21 May 2018
Cited by 166 | Viewed by 11567
Abstract
Enhancing the spatial resolution of hyperspectral images (HSI) is of significance for many applications. Fusing an HSI with a high resolution (HR) multispectral image (MSI) is an important technique for HSI enhancement. Inspired by the success of deep learning in image enhancement, in this paper we propose an HSI-MSI fusion method based on a deep convolutional neural network (CNN) with two branches devoted to the features of the HSI and the MSI, respectively. In order to exploit spectral correlation and fuse the MSI, we extract features from the spectrum of each pixel in the low resolution HSI, and from its corresponding spatial neighborhood in the MSI, with the two CNN branches. The extracted features are then concatenated and fed to fully connected (FC) layers, where the information of the HSI and MSI can be fully fused. The output of the FC layers is the spectrum of the expected HR HSI. In the experiments, we evaluate the proposed method on Airborne Visible Infrared Imaging Spectrometer (AVIRIS) and Environmental Mapping and Analysis Program (EnMAP) data. We also apply it to real Hyperion-Sentinel data fusion. The results on the simulated and the real data demonstrate that the proposed method is competitive with other state-of-the-art fusion methods.
(This article belongs to the Special Issue Multisensor Data Fusion in Remote Sensing)
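The two-branch architecture, with 1-D convolutions over each LR HSI pixel spectrum and 2-D convolutions over the corresponding MSI neighborhood, concatenated and fused by FC layers that output the HR spectrum, can be sketched in PyTorch as below. The layer counts and widths are placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn

class TwoBranchFusion(nn.Module):
    """Sketch of a two-branch HSI-MSI fusion CNN (placeholder layer sizes)."""
    def __init__(self, hsi_bands=100, msi_bands=4, patch=8):
        super().__init__()
        # Branch 1: 1-D convolutions over the LR HSI pixel spectrum.
        self.hsi_branch = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Flatten())
        # Branch 2: 2-D convolutions over the HR MSI spatial neighborhood.
        self.msi_branch = nn.Sequential(
            nn.Conv2d(msi_bands, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten())
        # FC layers fuse the concatenated features into the HR HSI spectrum.
        self.fc = nn.Sequential(
            nn.Linear(32 * hsi_bands + 32 * patch * patch, 512), nn.ReLU(),
            nn.Linear(512, hsi_bands))

    def forward(self, hsi_spectrum, msi_patch):
        f = torch.cat([self.hsi_branch(hsi_spectrum),
                       self.msi_branch(msi_patch)], dim=1)
        return self.fc(f)

model = TwoBranchFusion()
out = model(torch.randn(2, 1, 100), torch.randn(2, 4, 8, 8))
print(out.shape)  # torch.Size([2, 100])
```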
Show Figures

Graphical abstract

Figure 1: A typical CNN architecture for single-image super-resolution.
Figure 2: (a) Color composite (bands 29, 16, 8) of the HR HSI (left) and LR HSI (right); the image is cropped from the AVIRIS Indian Pines data. (b) Spectra of two different land covers in the HR HSI and LR HSI; the two land covers are marked in (a).
Figure 3: The proposed scheme of deep learning based HSI-MSI fusion.
Figure 4: The proposed two-branch CNN architecture for HSI-MSI fusion.
Figure 5: PSNR values of each band of the different fusion results by a factor of two: (a) on the AVIRIS Indian Pines data; (b) on the AVIRIS Moffett Field data; (c) on the EnMAP Berlin data.
Figure 6: Reconstructed images (band 70) and root mean square error (RMSE) maps of the different fusion results by a factor of four. The testing image is cropped from the Indian Pines AVIRIS data with size 256 × 256. (a) result of SSR [12]; (b) result of BayesSR [13]; (c) result of CNMF [6]; (d) result of Two-CNN-Fu; (e) RMSE map of SSR [12]; (f) RMSE map of BayesSR [13]; (g) RMSE map of CNMF [6]; (h) RMSE map of Two-CNN-Fu.
Figure 7: Reconstructed images (band 80) and root mean square error (RMSE) maps of the different fusion results by a factor of two. The testing image is cropped from the Moffett Field AVIRIS data with size 256 × 256. (a) result of SSR [12]; (b) result of BayesSR [13]; (c) result of CNMF [6]; (d) result of Two-CNN-Fu; (e) RMSE map of SSR [12]; (f) RMSE map of BayesSR [13]; (g) RMSE map of CNMF [6]; (h) RMSE map of Two-CNN-Fu.
Figure 8: Reconstructed images (band 200) and root mean square error (RMSE) maps of the different fusion results by a factor of two. The testing image is cropped from the Berlin EnMAP data with size 256 × 160. (a) result of SSR [12]; (b) result of BayesSR [13]; (c) result of CNMF [6]; (d) result of Two-CNN-Fu; (e) RMSE map of SSR [12]; (f) RMSE map of BayesSR [13]; (g) RMSE map of CNMF [6]; (h) RMSE map of Two-CNN-Fu.
Figure 9: The experiment data taken over Lafayette: (a) illustration of the Hyperion and Sentinel-2A data, where the red part is the Hyperion data, the green part is the Sentinel-2A data, the white part is the overlapped region, and the yellow line indicates the study area; (b) color composite (bands 31, 21, 14) of the Hyperion data in the study area, with size 341 × 365; (c) color composite (bands 4, 3, 2) of the Sentinel-2A data in the study area, with size 1023 × 1095.
Figure 10: False color composites (bands 45, 21, 14) of the different Hyperion-Sentinel fusion results; the size of the enhanced image is 1023 × 1095 with 10 m resolution. (a) result of SSR [12]; (b) result of BayesSR [13]; (c) result of CNMF [6]; (d) result of Two-CNN-Fu.
Figure 11: False color composite (bands 45, 21, 14) of the enlarged area in the blue box of Figure 10; the size of the area is 200 × 200. (a) the original 30 m Hyperion data; (b) fusion result of SSR [12]; (c) fusion result of BayesSR [13]; (d) fusion result of CNMF [6]; (e) fusion result of Two-CNN-Fu.
Figure 12: False color composite (bands 45, 21, 14) of the enlarged area in the yellow box of Figure 10; the size of the area is 200 × 200. (a) the original 30 m Hyperion data; (b) fusion result of SSR [12]; (c) fusion result of BayesSR [13]; (d) fusion result of CNMF [6]; (e) fusion result of Two-CNN-Fu.
Figure 13: The ground truth labeled for each class.
Figure 14: The classification map of the Hyperion-Sentinel fusion result by the Two-CNN-Fu method; the classifier is CCF.
Figure 15: Evolution of the loss function value over training epochs.
Figure 16: Evolution of fusion performance over epochs of fine-tuning; the testing data is the EnMAP Berlin data. (a) the first and last FC layers are fine-tuned on EnMAP; (b) all three FC layers are fine-tuned on EnMAP.
Figure 17: The kernels of different convolutional layers in the MSI branch trained on the EnMAP Berlin data: (a) kernels of the first layer; (b) kernels of the second layer; (c) kernels of the third layer.
Figure 18: The extracted feature maps of different convolutional layers in the MSI branch. The features are extracted from a simulated MSI with size 128 × 128, simulated from the EnMAP Berlin data: (a) input MSI data; (b) feature maps of the first layer; (c) feature maps of the second layer; (d) feature maps of the third layer.
Figure 19: The extracted features of different convolutional layers in the HSI branch. The features are extracted from the spectrum at coordinate (76,185) of the AVIRIS Indian Pines data: (a) input spectrum of the LR HSI; (b) features of the first layer; (c) features of the second layer; (d) features of the third layer.
20 pages, 15120 KiB  
Article
Delineating Urban Boundaries Using Landsat 8 Multispectral Data and VIIRS Nighttime Light Data
by Xingyu Xue, Zhoulu Yu, Shaochun Zhu, Qiming Zheng, Melanie Weston, Ke Wang, Muye Gan and Hongwei Xu
Remote Sens. 2018, 10(5), 799; https://doi.org/10.3390/rs10050799 - 21 May 2018
Cited by 24 | Viewed by 6815
Abstract
Administering an urban boundary (UB) is increasingly important for curbing disorderly urban land expansion. Traditional manual digitization is time-consuming, and it is difficult to delineate a connected UB in the urban fringe due to the fragmented urban pattern in daytime data. Nighttime light (NTL) data are a powerful tool for mapping urban extent, but both the blooming effect and the coarse spatial resolution prevent the resulting urban products from meeting the requirements of high-precision urban studies. In this study, a precise UB is extracted by a practical and effective method using NTL data and Landsat 8 data. Hangzhou, a megacity experiencing rapid urban sprawl, was selected to test the proposed method. Firstly, the rough UB was identified by the search mode of the concentric zones model (CZM) and a variance-based approach. Secondly, a buffer area was constructed to encompass the precise UB that lies near the rough UB within a certain distance. Finally, an edge detection method was adopted to obtain the precise UB at a spatial resolution of 30 m. The experimental results show that good performance was achieved and that the method overcomes the largest disadvantage of NTL data, the blooming effect. The findings also indicate that cities with a similar level of socio-economic status can be processed together when the method is applied at larger scales.
(This article belongs to the Special Issue Remote Sensing of Night Lights – Beyond DMSP)
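The final edge-detection step, Canny applied to the VANUI image after a median rather than Gaussian filter to suppress salt-and-pepper noise without blurring edges (compare Figure 9 below), could be sketched with OpenCV as follows. The raster is synthetic and the hysteresis thresholds are placeholders.

```python
import numpy as np
import cv2  # opencv-python

# Synthetic stand-in for a VANUI image scaled to 8-bit (the paper multiplies by 1000).
rng = np.random.default_rng(5)
vanui = rng.uniform(0, 1, (400, 400)).astype(np.float32)
vanui[100:300, 120:280] += 0.8  # a bright "urban" block
img8 = cv2.normalize(vanui, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# A median filter removes salt-and-pepper noise while preserving edges better
# than a Gaussian blur (cf. the Figure 9 comparison).
smoothed = cv2.medianBlur(img8, 5)

# Canny edge detection; the hysteresis thresholds here are placeholders.
edges = cv2.Canny(smoothed, 50, 150)
print("edge pixels:", int((edges > 0).sum()))
```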
Show Figures

Graphical abstract

Figure 1: The location of the study area.
Figure 2: The framework proposed for this study.
Figure 3: Schematic diagram showing (a) the nighttime light (NTL) data and (b) the VANUI. Their spatial resolutions are 390 m and 30 m, respectively.
Figure 4: This diagram takes the study area, with several municipal centers, as an example to illustrate the principle of the concentric zone model (CZM). In our study, 728 concentric zones were established (interval = 0.1 nW/cm²/sr). Taking two concentric zones as an example, the red part represents the concentric zone derived from the NTL digital number (DN) range between 72.8 (the highest NTL DN of the study area) and 30 nW/cm²/sr. The red and yellow parts together represent another concentric zone, derived from the NTL DN range between 72.8 and 20 nW/cm²/sr.
Figure 5: An intermediate result of our method, indicating the relationship between NTL intensity and the variance between rural and urban areas. The variances were calculated from the sub-regions of the VANUI image, which were clipped by the corresponding concentric zones. The concentric zones were derived from the corresponding NTL DN range. The 10–100 m range represents the different spatial resolutions of the VANUI images after resampling.
Figure 6: (a) The VANUI image (multiplied by 1000), and (b) the rough UB and buffer area; the 27 July 2016 Landsat 8 OLI image of Section 3.1 was selected as the background satellite image.
Figure 7: The precise UB and the rough UB; the 27 July 2016 Landsat 8 OLI image of Section 3.1 was selected as the background satellite image.
Figure 8: Comparisons of the two UBs in the mixed zones between continuous vegetation and continuous impervious surfaces ((a,b), respectively), and in the mixed zones between scattered and continuous impervious surfaces ((c,d), respectively). The red boundary line is the manually digitalized UB and the yellow boundary line is from the proposed method. The 27 July 2016 Landsat 8 OLI image of Section 3.1 was selected as the background satellite image. Descriptions of the manually digitalized UB can be found in the Methodology section (Section 3.6).
Figure 9: Comparison between the median and Gaussian filters in removing salt-and-pepper noise: (a) Landsat 8 OLI 30 m true color image; (b) the edge information derived from the Canny algorithm with a Gaussian filter; and (c) the edge information from the Canny algorithm with a median filter. The VANUI image was selected as the background image.
Figure 10: Two groups of samples illustrating the confusion among VANUI values inside and outside the UB. Sites 1 and 2 were covered by impervious surfaces with lower NTL intensity, while sites 3 and 4 were vegetation- and water-filled, reflecting a large amount of NTL from nearby buildings. The VANUI DNs of the first group, with lower NTL values, were confused with those of the second group, with high NTL values. The VANUI image was selected as the background image.
31 pages, 161658 KiB  
Article
Evolution and Controls of Large Glacial Lakes in the Nepal Himalaya
by Umesh K. Haritashya, Jeffrey S. Kargel, Dan H. Shugar, Gregory J. Leonard, Katherine Strattman, C. Scott Watson, David Shean, Stephan Harrison, Kyle T. Mandli and Dhananjay Regmi
Remote Sens. 2018, 10(5), 798; https://doi.org/10.3390/rs10050798 - 21 May 2018
Cited by 84 | Viewed by 13937
Abstract
Glacier recession driven by climate change produces glacial lakes, some of which are hazardous. Our study assesses the evolution of three of the most hazardous moraine-dammed proglacial lakes in the Nepal Himalaya: Imja, Lower Barun, and Thulagi. Imja Lake (up to 150 m deep; 78.4 × 10⁶ m³ volume; surveyed in October 2014) and Lower Barun Lake (205 m maximum observed depth; 112.3 × 10⁶ m³ volume; surveyed in October 2015) are much deeper than previously measured, and their readily drainable volumes are slowly growing. Their surface areas have been increasing at an accelerating pace, from a few small supraglacial lakes in the 1950s/1960s to 1.33 km² and 1.79 km² in 2017, respectively. In contrast, the surface area (0.89 km²) and volume of Thulagi Lake (76 m maximum observed depth; 36.1 × 10⁶ m³; surveyed in October 2017) have remained almost stable for about two decades. Analysis of changes in the moraine dams of the three lakes using digital elevation models (DEMs) quantifies the degradation of the dams due to the melting of their ice cores and hence their natural lowering rates, as well as the potential for glacial lake outburst floods (GLOFs). We examined the likely future evolution of lake growth and the hazard processes associated with lake instability, which suggests faster growth and increased hazard potential at Lower Barun Lake.
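The lake-area extrapolation shown in Figure 3, fitting second- and third-order polynomials to each lake's area time series and projecting growth to 2025, is straightforward to reproduce in outline. The observations below are synthetic stand-ins, not the paper's digitized areas.

```python
import numpy as np

# Synthetic lake-area observations (km^2) standing in for Landsat-derived areas.
years = np.array([1976, 1989, 1999, 2005, 2010, 2014, 2017], dtype=float)
area = np.array([0.15, 0.45, 0.80, 1.00, 1.25, 1.55, 1.79])  # Lower Barun-like

# Fit second- and third-order polynomials and extrapolate to 2025.
for order in (2, 3):
    coeffs = np.polyfit(years, area, order)
    proj = np.polyval(coeffs, 2025.0)
    print(f"order {order}: projected 2025 area = {proj:.2f} km^2")
```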
Show Figures

Graphical abstract

Figure 1: Map of our study area. (A) GTOPO30 topographic hillshade showing Nepal and the surrounding region; the three studied lake locations are marked: Imja (red dot), Lower Barun (black dot), and Thulagi (green dot). Sentinel-2 satellite images (false color infrared, bands 8, 4, 3) in insets (B) and (C) (19 November 2016) and (D) (22 November 2016) show the Imja, Lower Barun, and Thulagi Glaciers and Lakes, along with their tributaries and nearby glaciers.
Figure 2: Vessel tracks collecting bathymetric data at each of the lake locations. Yellow dots in the middle panel (Lower Barun Lake) show points where depth and location data were collected manually.
Figure 3: Area change of Imja, Lower Barun, and Thulagi Glacial Lakes as monitored using Landsat satellite images from the early-to-mid 1970s to 2017. Second- and third-order polynomial projections are fitted to each lake's area data to extrapolate possible future growth up to 2025. Two different bars show errors relating to accuracy (large error bars) and precision (small bars, from Equation (1)), as discussed in Section 3.1. Data for the Imja Lake area in 1964 were obtained from Reference [75], with no available error estimates.
Figure 4: Long-term evolution of lake perimeter at ~4–11-year intervals for (A) Imja Lake, (B) Lower Barun Lake, and (C) Thulagi Lake. See Table S1 for the full list of observations.
Figure 5: Elevation change rates (m/year) for the three study sites from 2000 to 2014–2016.
Figure 6: Ice-cored moraine.
Figure 7: Normalized distribution showing the sky-view factor percentage. Blue tones represent relatively high topographic shielding of diffuse skylight irradiance and red tones indicate a full hemispherical view. A zoomed view of the terminus region (blue rectangle), stretched with a standard deviation of 1 to reflect the range of distribution, shows the smooth texture at Thulagi due to the relative absence of supraglacial ponds and ice cliffs compared to the Imja and Lower Barun Glaciers.
Figure 8: Displacement maps (m/year) and two-point boxcar average velocity measured along the centerlines of (a–c) Imja and Lhotse Shar Glaciers, (d–f) Lower Barun Glacier, and (g–i) Thulagi Glacier. Headwall (HW), transition zone (TZ), Chamlang Glacier confluence (CG), icefall (IF), icefall-1 (IF1), and icefall-2 (IF2) mark locations where a change in flow speed was observed.
Figure 9: Smoothed bathymetric maps of all three lakes. Insets show the lake terminus areas.
18 pages, 6597 KiB  
Article
Automated Extraction of Surface Water Extent from Sentinel-1 Data
by Wenli Huang, Ben DeVries, Chengquan Huang, Megan W. Lang, John W. Jones, Irena F. Creed and Mark L. Carroll
Remote Sens. 2018, 10(5), 797; https://doi.org/10.3390/rs10050797 - 21 May 2018
Cited by 164 | Viewed by 15553
Abstract
Accurately quantifying surface water extent in wetlands is critical to understanding their role in ecosystem processes. However, current regional- to global-scale surface water products lack the spatial or temporal resolution necessary to characterize heterogeneous or variable wetlands. Here, we propose a fully automatic classification tree approach that classifies surface water extent using Sentinel-1 synthetic aperture radar (SAR) data and training datasets derived from prior class masks. Prior classes of water and non-water were generated from the Shuttle Radar Topography Mission (SRTM) water body dataset (SWBD) or from composited dynamic surface water extent (cDSWE) class probabilities. Classification maps of water and non-water were derived over two distinct wetlandscapes: the Delmarva Peninsula and the Prairie Pothole Region. Overall classification accuracy ranged from 79% to 93% when compared to high-resolution images at the Prairie Pothole Region site. Using cDSWE class probabilities reduced omission errors for the water class by 10% and commission errors for the non-water class by 4% compared with results generated using the SWBD water mask. These findings indicate that including prior water masks that reflect the dynamics of surface water extent (i.e., cDSWE) is important for accurately mapping water bodies with SAR data.
(This article belongs to the Special Issue Remote Sensing for Flood Mapping and Monitoring of Flood Dynamics)
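The automated training loop, drawing water/non-water labels from a prior mask (SWBD or cDSWE probabilities) and then classifying Sentinel-1 backscatter and derived indices, can be sketched as follows. The backscatter values are synthetic, and a noisy copy of the truth stands in for the prior mask labels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(6)

# Synthetic gamma-naught (dB): water is typically darker in both VV and VH.
n = 20000
is_water_true = rng.random(n) < 0.3
vv = np.where(is_water_true, rng.normal(-20, 2, n), rng.normal(-10, 3, n))
vh = np.where(is_water_true, rng.normal(-26, 2, n), rng.normal(-16, 3, n))
ndpi = (vv - vh) / (vv + vh)          # one of the polarimetric indices used
X = np.column_stack([vv, vh, ndpi])

# The prior mask supplies (imperfect) training labels, so no manual training
# data are needed; a noisy copy of the truth stands in for SWBD/cDSWE labels.
prior = is_water_true ^ (rng.random(n) < 0.05)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:15000], prior[:15000])
print("agreement with truth:",
      (clf.predict(X[15000:]) == is_water_true[15000:]).mean())
```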
Show Figures

Figure 1

Figure 1
<p>Map of the study sites, including the Prairie Pothole Region (PPR) and the Delmarva Peninsula (DMV), with Landsat path/row (purple solid line) and Sentinel-1 path/frame (blue solid line). National Agriculture Imagery Program (NAIP) images (right column) are given for each site, representing inland (<b>upper right</b>) and coastal (<b>bottom right</b>) wetlandscapes.</p>
Full article ">Figure 2
<p>Workflow for mapping surface water extent using Sentinel-1 SAR data.</p>
Full article ">Figure 3
<p>The Prairie Pothole Region site and footprints of remotely sensed data used in this study. The water and land classes are from dynamic surface water extent (DSWE) composite class probabilities. Sentinel-1 data were collected at midnight (UTC) on 5 July 2016 and 10 August 2016. NAIP images were collected on the same days (noon to afternoon) as the Sentinel-1 data.</p>
Full article ">Figure 4
<p>Land (<b>A</b>) and Water (<b>B</b>) class probabilities summarized from composited dynamic surface water extent (cDSWE) water/land classes, and land/water classes derived from DSWE classes using a 95% threshold (<b>C</b>) and the Shuttle Radar Topography Mission water body dataset (SWBD) water/land mask (<b>D</b>) over a site on the Delmarva Peninsula (see map). The zoom-in window (<b>a</b>–<b>d</b>) in the bottom-right shows the difference in spatial detail between the two products. These two prior masks were used to train and calibrate the surface water models.</p>
Full article ">Figure 5
<p>Box plots of polarized bands, indices and geometry by land and water classes over the Delmarva Peninsula. The red bar shows the median and the blue box represents the first and third quartiles. VVrVH = VV/VH, NDPI = (VV − VH)/(VV + VH), NVHI = VH/(VV + VH), NVVI = VV/(VV + VH), EIA = ellipsoid incidence angle, and LIA = local incidence angle.</p>
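The indices listed in this caption follow directly from the two polarizations; a minimal helper is sketched below, assuming VV and VH backscatter as NumPy arrays in linear power units (values in dB would first be converted with 10**(x/10)):

```python
import numpy as np

def sar_water_indices(vv, vh):
    """Polarization indices from Figure 5 (linear power units)."""
    vvrvh = vv / vh                  # VVrVH = VV/VH
    ndpi  = (vv - vh) / (vv + vh)    # normalized difference polarization index
    nvhi  = vh / (vv + vh)           # normalized VH index
    nvvi  = vv / (vv + vh)           # normalized VV index
    return vvrvh, ndpi, nvhi, nvvi
```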
Full article ">Figure 6
<p>Gamma0_VV and Gamma0_VH (<b>left</b> column, (<b>A</b>,<b>D</b>)), density scatterplot (<b>middle</b> column, (<b>B</b>,<b>E</b>)), and binned scatterplot (<b>right</b> column, (<b>C</b>,<b>F</b>)) showing the separability between land and water classes defined by the SWBD in the Delmarva Peninsula. In the binned scatter plot, the backscatter coefficients (Gamma0_VV and Gamma0_VH in dB) are shown in cyan for water pixels and in red for land pixels. Grey bars represent 1 standard deviation.</p>
Full article ">Figure 7
<p>Random forest classification results over the Prairie Pothole Region (PPR, <b>left</b> column, (<b>A</b>,<b>B</b>)) and Delmarva (DMV, <b>right</b> column, (<b>D</b>,<b>E</b>)) sites, using a prior mask either from SWBD (<b>top</b> row) or composite DSWE (cDSWE) class probabilities (<b>second</b> row). The Sentinel-1 images (<b>third</b> row) are shown as false-color composites of gamma naught (dB) (R: VV, G: VH, B: VHrVV), and were acquired on 10 August 2016 (<b>E</b>) and 9 July 2016 (<b>F</b>). The subzoom windows (bottom row) show small water bodies in the PPR (left, (<b>a</b>–<b>c</b>)) and linear streams (right, (<b>d</b>–<b>f</b>)) that are missing from the results using SWBD as the prior mask. Differences in classification results between using SWBD and cDSWE are labeled in light to dark orange colors on the classification maps (<b>A</b>,<b>B</b>) and subzoom maps (<b>a</b>,<b>d</b>).</p>
Full article ">Figure 8
<p>Comparison of classification maps derived from near-coincident Sentinel-1 (<b>upper</b> row) and Landsat-8 DSWE (<b>bottom</b> row) data, over the sites of the Prairie Pothole Region (PPR, <b>left</b> column) and the Delmarva Peninsula (DMV, <b>right</b> column). The two Sentinel-1 classification maps were generated using cDSWE as the prior mask from Sentinel-1 data collected on (<b>B</b>) 9 July 2016 and (<b>D</b>) 10 August 2016. The two DSWE products were generated from Landsat-8 data collected on (<b>A</b>) 11 July 2016 and (<b>C</b>) 11 August 2016. Differences in classification results between this study and DSWE are labeled in light to dark orange colors on the classification maps from this study (<b>A</b>,<b>B</b>).</p>
Full article ">Figure 9
<p>Classification results based on Sentinel-1 SAR data from April to September 2016, using cDSWE probabilities to derive training data for the Prairie Pothole Region site. The zoom-in window (A) was selected to illustrate change patterns of the surface water extent alongside the weather data in <a href="#remotesensing-10-00797-f010" class="html-fig">Figure 10</a>.</p>
Full article ">Figure 10
<p>Time series of Sentinel-1 derived percentage of surface water extent (%) and precipitation (mm) for a subset in the Prairie Pothole Region site. The percentage of water was calculated from the time series of classification maps over the given inset in <a href="#remotesensing-10-00797-f009" class="html-fig">Figure 9</a>. The daily precipitation data were collected by a nearby North Dakota Agricultural Weather Network (NDAWN) weather station in Robinson, ND [<a href="#B49-remotesensing-10-00797" class="html-bibr">49</a>].</p>
Full article ">
19 pages, 3387 KiB  
Article
Performance of Solar-Induced Chlorophyll Fluorescence in Estimating Water-Use Efficiency in a Temperate Forest
by Xiaoliang Lu, Zhunqiao Liu, Yuyu Zhou, Yaling Liu and Jianwu Tang
Remote Sens. 2018, 10(5), 796; https://doi.org/10.3390/rs10050796 - 20 May 2018
Cited by 5 | Viewed by 4551
Abstract
Water-use efficiency (WUE) is a critical variable describing the interrelationship between carbon uptake and water loss in land ecosystems. Different WUE formulations (WUEs) including intrinsic water use efficiency (WUEi), inherent water use efficiency (IWUE), and underlying water use efficiency (uWUE) have [...] Read more.
Water-use efficiency (WUE) is a critical variable describing the interrelationship between carbon uptake and water loss in land ecosystems. Different WUE formulations (WUEs) including intrinsic water use efficiency (WUEi), inherent water use efficiency (IWUE), and underlying water use efficiency (uWUE) have been proposed. Based on continuous measurements of carbon and water fluxes and solar-induced chlorophyll fluorescence (SIF) at a temperate forest, we analyze the correlations between SIF emission and the different WUEs at the canopy level by using linear regression (LR) and Gaussian processes regression (GPR) models. Overall, we find that SIF emission has a good potential to estimate IWUE and uWUE, especially when a combination of different SIF bands and a GPR model is used. At an hourly time step, canopy-level SIF emission can explain up to 65% and 61% of the variance in IWUE and uWUE, respectively. Specifically, we find that (1) a daily time step obtained by averaging hourly values during daytime can enhance the SIF-IWUE correlations, (2) the SIF-IWUE correlations decrease when photosynthetically active radiation and air temperature exceed their optimal biological thresholds, (3) a low Leaf Area Index (LAI) has a negative effect on the SIF-IWUE correlations due to large evaporation fluxes, (4) a high LAI in summer also reduces the SIF-IWUE correlations, most likely due to increased scattering and (re)absorption of the SIF signal, and (5) the observation time during the day has a strong impact on the SIF-IWUE correlations, and SIF measurements in the early morning have the lowest power to estimate IWUE due to the large evaporation of dew. This study provides a new way to evaluate the stomatal regulation of plant gas exchange without complex parameterizations. Full article
(This article belongs to the Special Issue Remote Sensing in Forest Hydrology)
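For readers unfamiliar with the formulations named in the abstract, the definitions commonly used in the eddy-covariance literature are sketched below (these are the standard forms, not copied from the paper; GPP is gross primary production, ET evapotranspiration, g_s stomatal conductance, and VPD vapor pressure deficit):

```latex
\mathrm{WUE} = \frac{\mathrm{GPP}}{\mathrm{ET}}, \qquad
\mathrm{WUE_i} = \frac{\mathrm{GPP}}{g_s}, \qquad
\mathrm{IWUE} = \frac{\mathrm{GPP}\cdot\mathrm{VPD}}{\mathrm{ET}}, \qquad
\mathrm{uWUE} = \frac{\mathrm{GPP}\cdot\sqrt{\mathrm{VPD}}}{\mathrm{ET}}
```

The VPD weighting in the latter two forms is intended to remove the confounding effect of atmospheric dryness on ET.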
Show Figures

Graphical abstract
Full article ">Figure 1
<p>The time series of hourly (<b>a</b>) water use efficiency (WUE, g C/kg H<sub>2</sub>O); (<b>b</b>) intrinsic water use efficiency (WUE<sub>i</sub>, μmol C/mmol H<sub>2</sub>O); (<b>c</b>) inherent water use efficiency (IWUE, g C hPa/kg H<sub>2</sub>O) and (<b>d</b>) underlying water use efficiency (uWUE, g C hPa/kg H<sub>2</sub>O) at the Harvard Forest site in 2014. The red lines represent 4-day daytime averages. The values in the top <span class="html-italic">x</span>-axes represent months.</p>
Full article ">Figure 2
<p>The time series of hourly (<b>a</b>) SIF<sub>687</sub> (mW/m<sup>2</sup>/sr/nm), (<b>b</b>) SIF<sub>720</sub> (mW/m<sup>2</sup>/sr/nm) and (<b>c</b>) SIF<sub>761</sub> (mW/m<sup>2</sup>/sr/nm) at the Harvard Forest site in 2014. The red lines represent 4-day daytime averages. The values in the top <span class="html-italic">x</span>-axes represent months.</p>
Full article ">Figure 3
<p>The effect of vapor pressure deficit (VPD, kPa) on the coefficients of determination (<span class="html-italic">R</span><sup>2</sup>) between inherent water use efficiency (IWUE) and the selected single SIF bands and their combinations by using (<b>a</b>) linear regression analysis and (<b>b</b>) Gaussian processes regression, respectively. The blue, yellow, and red bars represent <span class="html-italic">R</span><sup>2</sup> for data with VPD ≤ 0.5 kPa, 0.5 kPa &lt; VPD ≤ 1.0 kPa, and VPD &gt; 1.0 kPa, respectively. All data have an hourly time resolution and belong to the period between 7 a.m. and 5 p.m. local time.</p>
Full article ">Figure 4
<p>The effect of intensity of Photosynthetically Active Radiation (PAR) on the coefficients of determination (<span class="html-italic">R</span><sup>2</sup>) between inherent water use efficiency (IWUE) and the selected single SIF bands and their combinations by using (<b>a</b>) linear regression analysis and (<b>b</b>) Gaussian processes regression, respectively. The blue/red bars represent <span class="html-italic">R</span><sup>2</sup> from data with PAR less/more than 1300 μmol photons/m<sup>2</sup>/s. All data have an hourly time resolution and belong to the period between 7 a.m. and 5 p.m. local time.</p>
Full article ">Figure 5
<p>The effect of air temperature (<span class="html-italic">T</span><sub>air</sub>, °C) on the coefficients of determination (<span class="html-italic">R</span><sup>2</sup>) between IWUE and the selected single SIF bands and their combinations by using (<b>a</b>) linear regression analysis and (<b>b</b>) Gaussian processes regression, respectively. The blue, green and yellow bars represent <span class="html-italic">R</span><sup>2</sup> from data with <span class="html-italic">T</span><sub>air</sub> ≤ 14 °C, 14 °C &lt; <span class="html-italic">T</span><sub>air</sub> &lt; 22 °C and <span class="html-italic">T</span><sub>air</sub> ≥ 22 °C, respectively. All data have an hourly time resolution and belong to the period between 7 a.m. and 5 p.m. local time.</p>
Full article ">Figure 6
<p>The effect of leaf area index (LAI) on the coefficients of determination (<span class="html-italic">R</span><sup>2</sup>) between inherent water use efficiency (IWUE) and the selected single SIF bands and their combinations by using (<b>a</b>) linear regression analysis and (<b>b</b>) Gaussian processes regression, respectively. The LAI ranges are included in the legends. All data have an hourly time resolution and belong to the period between 7 a.m. and 5 p.m. local time.</p>
Full article ">Figure 7
<p>The effect of diurnal observation time on the coefficients of determination (<span class="html-italic">R</span><sup>2</sup>) between inherent water use efficiency (IWUE) and the selected single SIF bands and their combinations by using (<b>a</b>) linear regression analysis and (<b>b</b>) Gaussian processes regression, respectively. All data have an hourly time resolution. The blue, green, and yellow bars represent <span class="html-italic">R</span><sup>2</sup> for data with observation times of 7–10 a.m., 10 a.m.–3 p.m., and 3–5 p.m., respectively.</p>
Full article ">
24 pages, 6153 KiB  
Article
Vertical Structure Anomalies of Oceanic Eddies and Eddy-Induced Transports in the South China Sea
by Wenjin Sun, Changming Dong, Wei Tan, Yu Liu, Yijun He and Jun Wang
Remote Sens. 2018, 10(5), 795; https://doi.org/10.3390/rs10050795 - 20 May 2018
Cited by 39 | Viewed by 7725
Abstract
Using satellite altimetry sea surface height anomalies (SSHA) and Argo profiles, we investigated the statistical characteristics and 3-D structures of eddies, eddy-induced changes in physical parameters, and eddy-induced heat/freshwater transports in the South China Sea (SCS). In total, 31,744 cyclonic eddies (CEs, snapshot) and 29,324 anticyclonic eddies (AEs) [...] Read more.
Using satellite altimetry sea surface height anomalies (SSHA) and Argo profiles, we investigated the statistical characteristics and 3-D structures of eddies, eddy-induced changes in physical parameters, and eddy-induced heat/freshwater transports in the South China Sea (SCS). In total, 31,744 cyclonic eddies (CEs, snapshot) and 29,324 anticyclonic eddies (AEs) were detected in the SCS between 1 January 2005 and 31 December 2016. The composite analysis uncovered that changes in physical parameters modulated by eddies are mainly confined to the upper 400 m. The maximum change of temperature (T), salinity (S) and potential density (σθ) within the composite CE reaches −1.5 °C at about 70 m, 0.1 psu at about 50 m, and 0.5 kg m−3 at about 60 m, respectively. In contrast, the maximum change of T, S and σθ in the composite AE reaches 1.6 °C (about 110 m), −0.1 psu (about 70 m), and −0.5 kg m−3 (about 90 m), respectively. The maximum swirl velocity within the composite CE and AE reaches 0.3 m s−1. The zonal freshwater transport induced by CEs and AEs is (373.6 ± 9.7)×103 m3 s−1 and (384.2 ± 10.8)×103 m3 s−1, respectively, contributing up to (8.5 ± 0.2)% and (8.7 ± 0.2)% of the annual mean transport through the Luzon Strait. Full article
(This article belongs to the Section Ocean Remote Sensing)
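A minimal sketch of the eddy-coordinate normalization that underlies the composite analysis (see Figure 1b,c below), assuming each Argo profile has been matched to an eddy with known center and radius; the function and variable names are illustrative:

```python
import numpy as np

def to_eddy_coordinates(lon_p, lat_p, lon_c, lat_c, radius_km):
    """Project a profile position into normalized eddy coordinates.

    Returns (dX, dY) in units of the eddy radius, so that the
    composite eddy occupies roughly the unit circle.
    """
    km_per_deg = 111.32                      # approximate km per degree latitude
    dx_km = (lon_p - lon_c) * km_per_deg * np.cos(np.deg2rad(lat_c))
    dy_km = (lat_p - lat_c) * km_per_deg
    return dx_km / radius_km, dy_km / radius_km

# Profiles falling inside ~1.5 radii would then be binned in (dX, dY)
# and averaged level by level to form composite T/S anomaly structures.
```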
Show Figures

Graphical abstract
Full article ">Figure 1
<p>(<b>a</b>) Number of Argo profiles in each 1°×1° bin in the South China Sea (from 17 April 2006 to 31 December 2016). The dashed lines are the boundaries of the subregions. (<b>b</b>,<b>c</b>) The distribution of Argo profiles in the normalized eddy-coordinate space <math display="inline"><semantics> <mrow> <mo>(</mo> <mo>Δ</mo> <mi mathvariant="normal">X</mi> <mo>,</mo> <mo>Δ</mo> <mi mathvariant="normal">Y</mi> <mo>)</mo> </mrow> </semantics></math> associated with cyclonic and anticyclonic eddies, respectively.</p>
Full article ">Figure 2
<p>The distribution of eddy numbers from 1 January 2005 to 31 December 2016 in each 1°×1° bin in the South China Sea. The unit is the occurrence number of eddy snapshots. Bins with fewer than 10 eddies are omitted.</p>
Full article ">Figure 3
<p>Monthly variation of the percentage of the South China Sea surface area covered by eddies (<b>blue line</b>) and the percentage of all detected eddies occurring in each month (<b>black line</b>). The standard errors are also shown in the figure.</p>
Full article ">Figure 4
<p>Time evolution of the mean eddy characteristic parameters with normalized lifespan: (<b>a</b>) normalized radius; (<b>b</b>) normalized eddy kinetic energy; (<b>c</b>) normalized vorticity; and (<b>d</b>) normalized eccentricity ratio. Each eddy’s age is normalized by its lifespan. Only eddies with lifespan longer than or equal to 30 days are included in the analysis. Blue and red lines indicate cyclonic and anticyclonic eddies, respectively. The standard errors are also shown in the figure.</p>
Full article ">Figure 5
<p>3-D structure of the: (<b>a</b>) composite cyclonic eddy; and (<b>b</b>) anticyclonic eddy. Colors represent potential density anomaly (unit: kg m<sup>−3</sup>), white vectors indicate geostrophic current anomaly (unit: m s<sup>−1</sup>), and grey outlines are the composite eddy boundary.</p>
Full article ">Figure 6
<p>(<b>a</b>) Vertical section of eddy swirl velocity in meridional component <math display="inline"><semantics> <mi>v</mi> </semantics></math> (unit: m s<sup>−1</sup>) of the composite cyclonic eddy center along <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi mathvariant="normal">Y</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>. (<b>c</b>) Horizontal distribution of swirl velocity of the composite cyclonic eddy on the sea surface. The dashed line is the boundary of the composite cyclonic eddy, and the asterisk indicates the composite eddy center. (<b>b</b>,<b>d</b>) The same as (<b>a</b>,<b>c</b>), respectively, but for the composite anticyclonic eddy.</p>
Full article ">Figure 7
<p>(<b>a</b>) Vertical profiles of average temperature anomaly (unit: °C); (<b>b</b>) average salinity anomaly (unit: 10<sup>−1</sup> psu); and (<b>c</b>) average potential density anomaly (unit: 10<sup>−1</sup> kg m<sup>−3</sup>). Red (blue) solid curves are averages of all the available Argo profiles within anticyclonic (cyclonic) eddies, and green/cyan/magenta/black solid (dashed) curves are averages of all the available Argo profiles within cyclonic (anticyclonic) eddies in subregions Z1/Z2/Z3/Z4, respectively.</p>
Full article ">Figure 8
<p>Vertical sections of temperature anomaly (<b>left</b> column (<b>a,d</b>), unit: °C), salinity anomaly (<b>middle</b> column (<b>b,e</b>), unit: 10<sup>−1</sup> psu), and potential density anomaly (<b>right</b> column (<b>c,f</b>), unit: kg m<sup>−3</sup>) across the composite eddy center along <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi mathvariant="normal">Y</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>. Upper panels represent the composite cyclonic eddy, while the lower ones represent the composite anticyclonic eddy.</p>
Full article ">Figure 9
<p>(<b>a</b>) Mixed layer depth anomaly modulated by the composite cyclonic eddy, averaged within 0.02 R around the composite eddy center after smoothing out local disturbances (unit: m). (<b>b</b>) The same as (<b>a</b>), but for the composite anticyclonic eddy. (<b>c</b>) Mixed layer depth anomaly variation curve across the composite eddy center along <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi mathvariant="normal">Y</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>. The blue and red curves denote the composite cyclonic and composite anticyclonic eddy, respectively.</p>
Full article ">Figure 10
<p>Heat and freshwater transports induced by eddy movement in the South China Sea: (<b>a</b>) time-averaged zonal heat transport induced by eddy movement integrated in the meridional direction (unit: 10<sup>10</sup> W); (<b>b</b>) time-averaged meridional heat transport caused by eddy movement integrated in the zonal direction; and (<b>c</b>,<b>d</b>) the same as (<b>a</b>,<b>b</b>), respectively, but for the freshwater transport (unit: 10<sup>3</sup> m<sup>3</sup> s<sup>−1</sup>). The blue/red/black solid curves indicate the cyclonic/anticyclonic/total transport, respectively, which is calculated by the method of Dong et al. [<a href="#B13-remotesensing-10-00795" class="html-bibr">13</a>]. The blue/red/black dashed curves are the same as the solid curves, but for the method of Dong et al. [<a href="#B22-remotesensing-10-00795" class="html-bibr">22</a>]. The standard errors are also shown in the figure.</p>
Full article ">Figure 11
<p>Distribution of (<b>a</b>) zonal and (<b>b</b>) meridional heat transport induced by the cyclonic eddy in each 1°×1° bin in the South China Sea (unit: 10<sup>10</sup> W degree<sup>−1</sup>). (<b>c</b>,<b>d</b>) The same as (<b>a</b>,<b>b</b>), respectively, but for the anticyclonic eddy.</p>
Full article ">Figure 12
<p>The same as <a href="#remotesensing-10-00795-f011" class="html-fig">Figure 11</a>, but for freshwater transport.</p>
Full article ">Figure 13
<p>Vertical sections of buoyancy frequency squared anomaly (upper row, unit: 10<sup>−4</sup> s<sup>−2</sup>) and potential vorticity anomaly (lower row, unit: 10<sup>−10</sup> m<sup>−1</sup> s<sup>−1</sup>) across the composite eddy center along <math display="inline"><semantics> <mrow> <mo>Δ</mo> <mi mathvariant="normal">Y</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>: (<b>a</b>,<b>c</b>) the composite cyclonic eddy; and (<b>b</b>,<b>d</b>) the composite anticyclonic eddy.</p>
Full article ">Figure 14
<p>Vertical sections of each potential vorticity anomaly term (unit: 10<sup>−10</sup> m<sup>−1</sup> s<sup>−1</sup>) modulated by the composite eddy across the eddy center along ∆Y = 0: (<b>a</b>–<b>c</b>) the composite cyclonic eddy; and (<b>d</b>–<b>f</b>) the composite anticyclonic eddy.</p>
Full article ">
14 pages, 20077 KiB  
Article
Efficient Ground Surface Displacement Monitoring Using Sentinel-1 Data: Integrating Distributed Scatterers (DS) Identified Using Two-Sample t-Test with Persistent Scatterers (PS)
by Roghayeh Shamshiri, Hossein Nahavandchi, Mahdi Motagh and Andy Hooper
Remote Sens. 2018, 10(5), 794; https://doi.org/10.3390/rs10050794 - 19 May 2018
Cited by 37 | Viewed by 7162
Abstract
Combining persistent scatterers (PS) and distributed scatterers (DS) is important for effective displacement monitoring using time-series of SAR data. However, for large stacks of synthetic aperture radar (SAR) data, the DS analysis using existing algorithms becomes a time-consuming process. Moreover, the whole procedure [...] Read more.
Combining persistent scatterers (PS) and distributed scatterers (DS) is important for effective displacement monitoring using time-series of SAR data. However, for large stacks of synthetic aperture radar (SAR) data, the DS analysis using existing algorithms becomes a time-consuming process. Moreover, the whole procedure of DS selection should be repeated as soon as a new SAR acquisition is made, which is challenging considering the short repeat-observation cycle of missions such as Sentinel-1. SqueeSAR is an approach for extracting signals from DS, which first applies a spatiotemporal filter on images and optimizes DS, then incorporates information from both optimized DS and PS points into interferometric SAR (InSAR) time-series analysis. In this study, we followed SqueeSAR and implemented a new approach for DS analysis using a two-sample t-test to efficiently identify neighboring pixels with similar behaviour. We evaluated the performance of our approach on 50 Sentinel-1 images acquired over Trondheim in Norway between January 2015 and December 2016. A cross-check of the number of identified neighboring pixels using the Kolmogorov–Smirnov (KS) test, which is employed in the SqueeSAR approach, against the t-test shows that their results are strongly correlated. However, in comparison to the KS-test, the t-test is less computationally intensive (98% faster). Moreover, the results obtained by applying the tests to SAR stack sizes ranging from 40 down to 10 images show that the t-test is less sensitive to the number of images. Full article
(This article belongs to the Special Issue Imaging Geodesy and Infrastructure Monitoring)
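A minimal sketch of the statistically homogeneous pixel (SHP) test described in the abstract, assuming amplitude time series for a central pixel and a candidate neighbor over the stack of acquisitions; the 95% significance level follows the text, while the equal-variance setting and the function names are illustrative assumptions:

```python
import numpy as np
from scipy.stats import ttest_ind

def is_shp(amp_center, amp_neighbor, alpha=0.05):
    """Two-sample t-test on two pixels' SAR amplitude time series.

    Failing to reject the null hypothesis of equal means is taken as
    evidence that the neighbor behaves like the central pixel (SHP).
    """
    _, p_value = ttest_ind(amp_center, amp_neighbor, equal_var=True)
    return p_value > alpha

# Example with a 50-image stack of synthetic amplitudes:
# rng = np.random.default_rng(0)
# a = rng.gamma(2.0, 1.0, 50); b = rng.gamma(2.0, 1.0, 50)
# print(is_shp(a, b))   # True when the two series are statistically similar
```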
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Flow diagram of the implemented method in this study.</p>
Full article ">Figure 2
<p>An example of SHP selection, showing: (<b>a</b>) all neighbors selected by applying the <span class="html-italic">t</span>-test; (<b>b</b>) labeling of the connected pixels using eight-connectivity, shown in different colors; and (<b>c</b>) discarding of pixels whose labels differ from that of the central pixel. The outline of the central pixel is shown in yellow.</p>
Full article ">Figure 3
<p>Sentinel-2 satellite image of the Trondheim study area. The white rectangle shows the outline of the Sentinel-1 data processed in this study.</p>
Full article ">Figure 4
<p>The PTA temporal coherence corresponding to the SHP map obtained by using the two-sample <span class="html-italic">t</span>-test.</p>
Full article ">Figure 5
<p>Mean line-of-sight velocity maps considering: (<b>a</b>) only PS points; (<b>b</b>) PS and DS pixels derived by our new method using the two-sample <span class="html-italic">t</span>-test; and (<b>c</b>) PS and DS pixels identified by the two-sample KS-test. The triangles show the selected reference area. The vectors H and L represent the satellite heading and look angle. Negative values imply motion away from the satellite. For the point labeled A, the displacement time-series is shown in Figure 9a. BA indicates the location of the profile analyzed in Figure 9b. A zoomed-in area around section BA from: (<b>d</b>) the PS result; and (<b>e</b>) the PS and DS pixels derived by the <span class="html-italic">t</span>-test.</p>
Full article ">Figure 6
<p>The number of SHP identified considering a <math display="inline"> <semantics> <mrow> <mn>15</mn> <mo>×</mo> <mn>21</mn> </mrow> </semantics> </math> estimation window and performing the two-sample (<b>a</b>) <span class="html-italic">t</span>-test and (<b>b</b>) KS-test, both at 95% significance level, and (<b>c</b>) the scatter plot of the number identified by the <span class="html-italic">t</span>-test versus the KS-test, color-coded by the smoothed density of pixels.</p>
Full article ">Figure 7
<p>(<b>a</b>) Amplitude image of the SLC on date 25 July 2015; and the filtered version using two-sample: (<b>b</b>) <span class="html-italic">t</span>-test; and (<b>c</b>) KS-test.</p>
Full article ">Figure 8
<p>(<b>a</b>) The line-of-sight displacement velocity derived using the KS-test versus the <span class="html-italic">t</span>-test, color-coded by the smoothed density of pixels; (<b>b</b>) the histogram of the difference between the velocities; and (<b>c</b>) the difference between the velocities versus the standard deviation, color-coded by the normalized smoothed density of pixels for each standard deviation bin with a width of 0.05. Dashed lines show the region where the absolute difference is less than twice the standard deviation.</p>
Full article ">Figure 9
<p>(<b>a</b>) The averaged line-of-sight (LOS) displacement time-series of the points within a circle of 50 m radius centered at the point A depicted in <a href="#remotesensing-10-00794-f005" class="html-fig">Figure 5</a>a; and (<b>b</b>) the profile of the LOS displacement rate along the line BA depicted in <a href="#remotesensing-10-00794-f005" class="html-fig">Figure 5</a>a.</p>
Full article ">Figure 10
<p>Correlation coefficient between the number of SHPs found for all pixels, for the full 50 images and for fewer images.</p>
Full article ">Figure 11
<p>An example of the SHPs (yellow points) identified by performing the two-sample KS-test, the two-sample <span class="html-italic">t</span>-test, and the one-sample <span class="html-italic">t</span>-test for the full 50 images and for fewer images. The central pixel is shown with red.</p>
Full article ">
19 pages, 5988 KiB  
Article
On the Desiccation of the South Aral Sea Observed from Spaceborne Missions
by Alka Singh, Ali Behrangi, Joshua B. Fisher and John T. Reager
Remote Sens. 2018, 10(5), 793; https://doi.org/10.3390/rs10050793 - 19 May 2018
Cited by 26 | Viewed by 6758
Abstract
The South Aral Sea has been massively affected by the implementation of a mega-irrigation project in the region, but ground-based observations have monitored the Sea poorly. This study is a comprehensive analysis of the mass balance of the South Aral Sea and its [...] Read more.
The South Aral Sea has been massively affected by the implementation of a mega-irrigation project in the region, but ground-based observations have monitored the Sea poorly. This study is a comprehensive analysis of the mass balance of the South Aral Sea and its basin, using multiple instruments from ground and space. We estimate lake volume, evaporation from the lake, and the Amu Darya streamflow into the lake using the strengths offered by various remote-sensing data. We also diagnose the attribution behind the shrinking of the lake and its possible future fate. Terrestrial water storage (TWS) variations observed by the Gravity Recovery and Climate Experiment (GRACE) mission over the Aral Sea region can approximate the water level of the East Aral Sea with good accuracy (1.8% normalized root mean square error (RMSE), and 0.9 correlation) against altimetry observations. Evaporation from the lake is back-calculated by integrating altimetry-based lake volume, in situ streamflow, and Global Precipitation Climatology Project (GPCP) precipitation. Different evapotranspiration (ET) products (the Global Land Data Assimilation System (GLDAS), the WaterGAP Hydrological Model (WGHM), and the Moderate-Resolution Imaging Spectroradiometer (MODIS) Global Evapotranspiration Project (MOD16)) significantly underestimate the evaporation from the lake. However, another MODIS-based estimate, the Priestley-Taylor Jet Propulsion Laboratory (PT-JPL) ET, shows remarkably high consistency (0.76 correlation) with our estimate (based on the water-budget equation). Further, streamflow is approximated by integrating lake volume variation, PT-JPL ET, and GPCP datasets. In another approach, the deseasonalized GRACE signal from the Amu Darya basin was also found to approximate streamflow and to predict extreme flow into the lake one or two months in advance. These estimates can be used for water resource management in the Amu Darya delta. The spatiotemporal pattern in the Amu Darya basin shows that TWS in the central region (predominantly in the primary irrigation belt outside the delta) has increased. This increase can be attributed to enhanced infiltration, as ET and the vegetation index (i.e., the normalized difference vegetation index (NDVI)) from the area have decreased. The additional infiltration might be an indication of deterioration of the canal structures and leakage in the area. The study shows how altimetry, optical images, gravimetric and other ancillary observations can collectively help to study the desiccating Aral Sea and its basin. A similar method can be used to explore other desiccating lakes. Full article
(This article belongs to the Special Issue Satellite Altimetry for Earth Sciences)
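The back-calculation of lake evaporation described above amounts to solving a simple lake water budget for E; the form below is an assumption consistent with the abstract rather than the authors' exact equation, with V the altimetry-based lake volume, A the lake surface area, Q the Amu Darya inflow, and P and E the precipitation and evaporation rates over the lake:

```latex
\frac{dV}{dt} = Q + (P - E)\,A
\qquad\Longrightarrow\qquad
E = P + \frac{Q - dV/dt}{A}
```

Streamflow can then be approximated the same way by solving for Q once an independent ET estimate (here PT-JPL) replaces E.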
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Study area: the Amu Darya originates from the glaciers of Hindukush and Pamir and terminates into the South Aral Sea (blue polygon).</p>
Full article ">Figure 2
<p>Water level above mean sea level: (<b>a</b>) historical annual Aral Sea data; (<b>b</b>) altimetry-based monthly South Aral Sea observations.</p>
Full article ">Figure 3
<p>The East Aral Sea water level. (<b>a</b>) Best fit between de-seasonalized terrestrial water storage (TWS) and altimetry water level; and (<b>b</b>) altimetry water level observations compared with the derived water height from Landsat and Gravity Recovery and Climate Experiment (GRACE). The gray area shows the uncertainty range of the GRACE-based estimate, calculated by the ± root mean square error (RMSE).</p>
Full article ">Figure 4
<p>(<b>a</b>) the South Aral Sea volume variations and (<b>b</b>) the West Aral Sea bathymetry below 25 m MSL.</p>
Full article ">Figure 5
<p>Time series of the South Aral Sea precipitation and evaporation; (<b>a</b>) historical annual in-situ observations; (<b>b</b>) recent annual evapotranspiration (ET) and Global Precipitation Climatology Project (GPCP) observations; and (<b>c</b>) monthly precipitation observations over the lake.</p>
Full article ">Figure 6
<p>Back-calculated evaporation (BCE) from the South Aral Sea compared with the other ET products.</p>
Full article ">Figure 7
<p>The Amu Darya streamflow. (<b>a</b>) Deseasonalized GRACE signal from the Amu Darya basin (DGADB) compared with the in-situ streamflow. The red vertical lines are peaks of DGADB, and the blue vertical lines are peaks of in situ streamflow; (<b>b</b>) three-month-smoothed Amu Darya streamflow from in situ observations compared with the two derived estimates (R1 and R2).</p>
Full article ">Figure 8
<p>The Amu Darya basin: (<b>a</b>) GRACE TWS; (<b>b</b>) GPCP; (<b>c</b>) normalized difference vegetation index (NDVI); and (<b>d</b>) MODIS based ET (MOD16).</p>
Full article ">
19 pages, 14541 KiB  
Article
Remotely Sensing the Morphometrics and Dynamics of a Cold Region Dune Field Using Historical Aerial Photography and Airborne LiDAR Data
by Carson A. Baughman, Benjamin M. Jones, Karin L. Bodony, Daniel H. Mann, Chris F. Larsen, Emily Himelstoss and Jeremy Smith
Remote Sens. 2018, 10(5), 792; https://doi.org/10.3390/rs10050792 - 19 May 2018
Cited by 20 | Viewed by 5808
Abstract
This study uses an airborne Light Detection and Ranging (LiDAR) survey, historical aerial photography and historical climate data to describe the character and dynamics of the Nogahabara Sand Dunes, a sub-Arctic dune field in interior Alaska’s discontinuous permafrost zone. The Nogahabara Sand Dunes [...] Read more.
This study uses an airborne Light Detection and Ranging (LiDAR) survey, historical aerial photography and historical climate data to describe the character and dynamics of the Nogahabara Sand Dunes, a sub-Arctic dune field in interior Alaska’s discontinuous permafrost zone. The Nogahabara Sand Dunes consist of a 43-km2 area of active transverse and barchanoid dunes within a 3200-km2 area of vegetated dune and sand sheet deposits. The average dune height in the active portion of the dune field is 5.8 m, with a maximum dune height of 28 m. Dune spacing is variable with average crest-to-crest distances for select transects ranging from 66 to 132 m. Between 1952 and 2015, dunes migrated at an average rate of 0.52 m a−1. Dune movement was greatest between 1952 and 1978 (0.68 m a−1) and least between 1978 and 2015 (0.43 m a−1). Dunes migrated predominantly to the southeast; however, along the dune field margin, net migration was towards the edge of the dune field regardless of heading. Better constraining the processes controlling dune field dynamics at the Nogahabara dunes would provide information that can be used to model possible reactivation of more northerly dune fields and sand sheets in response to climate change, shifting fire regimes and permafrost thaw. Full article
(This article belongs to the Special Issue Remote Sensing of Dynamic Permafrost Regions)
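A minimal sketch of the migration-rate calculation implied by the abstract (displacement of digitized slipface positions divided by elapsed time), assuming slipface crestlines digitized as vertex arrays in a projected coordinate system (meters); the nearest-neighbor matching is a simplification of transect-based change analysis:

```python
import numpy as np

def migration_rate(xy_t0, xy_t1, years):
    """Mean dune migration rate (m/yr) between two digitized slipfaces.

    xy_t0, xy_t1 : (n, 2) and (m, 2) arrays of vertex coordinates (m).
    years        : elapsed time between the two epochs.
    """
    # Distance from every epoch-0 vertex to its nearest epoch-1 vertex.
    d = np.sqrt(((xy_t0[:, None, :] - xy_t1[None, :, :]) ** 2).sum(-1))
    nearest = d.min(axis=1)
    return nearest.mean() / years

# e.g., 1952 -> 1978 slipface positions over 26 years:
# rate = migration_rate(slipface_1952, slipface_1978, 26.0)
```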
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Location of Nogahabara dunes with respect to Alaska and other major inactive dune fields or sand sheets including Great Kobuk Sand Dunes (GKSD) and Little Kobuk Sand Dunes (LKSD). Permafrost extent based on Pastick et al., 2015. Aeolian deposits based on Karlstrom, 1964; Jorgenson et al., 2008.</p>
Full article ">Figure 2
<p>The active Nogahabara Sand Dunes (brown region within the solid line) and surrounding inactive lobes (green regions within the broken line) addressed in this study. Yellow squares denote where dune migration rates were subsampled. Subsample area i41 is located on a rosette dune feature. All graphics overlie the 2015 LiDAR-derived DEM.</p>
Full article ">Figure 3
<p>(<b>a</b>) Western portion of the Nogahabara Dune field where dunes are migrating to the southwest; (<b>b</b>) ground view of wind-affected sand, sparse vegetation and scattered trees representative of the Nogahabara sand dunes; (<b>c</b>) Oblique view of active and recently stabilized sand deposits.</p>
Full article ">Figure 4
<p>Configuration of the LiDAR scanner (silver), GPS (yellow) and camera (black, background) within the aircraft. LiDAR scanner and camera are pointing through portholes in the belly of the aircraft.</p>
Full article ">Figure 5
<p>Dune height derived from methods described in <a href="#sec2dot3-remotesensing-10-00792" class="html-sec">Section 2.3</a>. Transects a–e delineate elevation profiles depicted in <a href="#remotesensing-10-00792-f006" class="html-fig">Figure 6</a>. The inset image depicts results from automated dune ridge identification. The base image is the LiDAR-derived DEM.</p>
Full article ">Figure 6
<p>(<b>a</b>–<b>d</b>) The range of mean dune spacing (T = transect length (m)/number of dune ridges (n)) within the active dunes; (<b>e</b>) dune spacing within an inactive rosette dune feature. Red lines indicate dunes identified by the authors for each transect.</p>
Full article ">Figure 7
<p>Time series of dune migration within the a9 subsample area (see <a href="#remotesensing-10-00792-f002" class="html-fig">Figure 2</a>). At this location, dune migration is to the southeast (~140°). (<b>a</b>) The dunes as they appear in the 1952 imagery and the digitized slipface position (orange line); (<b>b</b>) the same region as seen in the 1978 imagery and the digitized slipface position (green line) relative to the 1952 position (orange line); (<b>c</b>) the same region as seen in the 2015 LiDAR DEM and the digitized slipface location (blue line) relative to the 1978 (green) and 1952 (orange) locations.</p>
Full article ">Figure 8
<p>Climate summaries for Galena and Huslia including (<b>a</b>) mean annual air temperature and (<b>b</b>) annual totals of freezing and thawing degree days. FDD, freezing degree days; TDD, thawing degree days.</p>
Full article ">Figure 9
<p>Wind roses summarizing seasonal patterns of winds in excess of 6.5 m/s for the three study periods. Winter captures January, February and December. Spring captures March, April and May. Summer captures June, July and August. Fall captures September, October and November. Station ID: WBAN-26501.</p>
Full article ">Figure 10
<p>Wind roses summarizing seasonal patterns of winds in excess of 6.5 m/s for Huslia, AK. 1994–2016. Station ID: WBAN-26552.</p>
Full article ">Figure 11
<p>(<b>a</b>) Oblique aerial view of Nogahabara Sand Dunes and surrounding forest; (<b>b</b>) blow-out feature within the previously inactive dune field following the 2015 fire.</p>
Full article ">
15 pages, 9929 KiB  
Article
Spatiotemporal Analysis of Landsat-8 and Sentinel-2 Data to Support Monitoring of Dryland Ecosystems
by Neal J. Pastick, Bruce K. Wylie and Zhuoting Wu
Remote Sens. 2018, 10(5), 791; https://doi.org/10.3390/rs10050791 - 19 May 2018
Cited by 47 | Viewed by 7997
Abstract
Drylands are the habitat and source of livelihood for about two fifths of the world’s population and are highly susceptible to climate and anthropogenic change. To understand the vulnerability of drylands to changing environmental conditions, land managers need to effectively monitor rates of [...] Read more.
Drylands are the habitat and source of livelihood for about two fifths of the world’s population and are highly susceptible to climate and anthropogenic change. To understand the vulnerability of drylands to changing environmental conditions, land managers need to effectively monitor rates of past change, and remote sensing offers a cost-effective means to assess and manage these vast landscapes. Here, we present a novel approach to accurately monitor land-surface phenology in drylands of the Western United States using a regression tree modeling framework that combined information collected by the Operational Land Imager (OLI) onboard Landsat 8 and the Multispectral Instrument (MSI) onboard Sentinel-2. This highly automatable approach allowed us to precisely characterize seasonal variations in spectral vegetation indices with substantial agreement between observed and predicted values (R2 = 0.98; Mean Absolute Error = 0.01). Derived phenology curves agreed with independent eMODIS phenological signatures of major land cover types (average r-value = 0.86), cheatgrass cover (average r-value = 0.96), and growing season proxies for vegetation productivity (R2 = 0.88), although a systematic bias towards earlier maturity and senescence indicates enhanced monitoring capabilities associated with the use of harmonized Landsat-8 Sentinel-2 data. Overall, our results demonstrate that observations made by the MSI and OLI can be used in conjunction to accurately characterize land-surface phenology, and exclusion of imagery from either sensor drastically reduces our ability to monitor dryland environments. Given the decline in MODIS performance and its forthcoming decommissioning with no equivalent replacement planned, data fusion approaches that integrate observations from multispectral sensors will be needed to effectively monitor dryland ecosystems. While the synthetic image stacks are expected to be locally useful, the technical approach can serve a wide variety of applications such as invasive species and drought monitoring, habitat mapping, production of phenology metrics, and land-cover change modeling. Full article
(This article belongs to the Special Issue Remote Sensing of Arid/Semiarid Lands)
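A minimal sketch of the regression-tree idea in the abstract: learn NDVI as a function of acquisition timing and location from sparse harmonized Landsat-8/Sentinel-2 samples, then synthesize gap-free series. scikit-learn's RandomForestRegressor stands in for the regression-tree framework actually used, and the feature set is an illustrative assumption:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def fit_phenology_model(doy, row, col, ndvi):
    """Tree-based regression of NDVI on day of year and pixel location."""
    X = np.column_stack([doy, row, col])
    model = RandomForestRegressor(n_estimators=100, min_samples_leaf=5)
    model.fit(X, ndvi)
    return model

# Synthesize a weekly NDVI series for one (row, col) pixel:
# model = fit_phenology_model(doy_obs, row_obs, col_obs, ndvi_obs)
# weeks = np.arange(1, 53) * 7
# series = model.predict(np.column_stack(
#     [weeks, np.full(52, 120), np.full(52, 340)]))
```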
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Land cover information from the National Land Cover Database 2011 (NLCD; [<a href="#B21-remotesensing-10-00791" class="html-bibr">21</a>]) overlain by our study area (i.e., Harmonized Landsat-8 Sentinel-2 tile (11SQD)).</p>
Full article ">Figure 2
<p>Schematic diagram of overall workflow.</p>
Full article ">Figure 3
<p>(<b>Left</b>) Harmonized Landsat-8 Sentinel-2 (HLS) imagery (RGB; Bands 4, 3, and 2) overlaid by mask layers created using LaSRC (Top Middle [<a href="#B28-remotesensing-10-00791" class="html-bibr">28</a>]), FMask (Bottom Middle [<a href="#B27-remotesensing-10-00791" class="html-bibr">27</a>]); and (<b>Right</b>) decision tree models.</p>
Full article ">Figure 4
<p>Comparison of surface reflectance data from Harmonized Landsat-Sentinel-2 (HLS) products for one-day-apart acquisitions. Darker regions represent larger numbers of points (n = 50,000). Dashed-green line is 1-to-1 and the red line is the ordinary least squares (OLS) regression fit showing proportional bias.</p>
Full article ">Figure 5
<p>Time series of Harmonized Landsat-8 Sentinel-2 (HLS), predicted, and eMODIS Normalized Difference Vegetation Index (NDVI × 100) separated by major land cover classes [<a href="#B21-remotesensing-10-00791" class="html-bibr">21</a>]. Model<sub>LS</sub> was generated using both Landsat-8 and Sentinel-2A inputs, while Model<sub>L</sub> was generated using only Landsat-8 inputs. Each time series is calculated from all pixels (excluding masked pixels in the original HLS time series) representing largely homogenous areas (coefficient of variation &lt; 40%) and smoothed using a three-week sliding window approach. Timesteps where HLS data coverage was less than 70% are not shown.</p>
Full article ">Figure 6
<p>Time series of Harmonized Landsat-8 Sentinel-2 (HLS), predicted, and eMODIS Normalized Difference Vegetation Index (NDVI × 100) separated by percentage cheatgrass cover [<a href="#B22-remotesensing-10-00791" class="html-bibr">22</a>]. Model<sub>LS</sub> was generated using both Landsat-8 and Sentinel-2A inputs, while Model<sub>L</sub> was generated using only Landsat-8 inputs. Each time series was from all pixels (excluding masked areas in the original HLS time series) and smoothed using a three-week sliding window approach. Timesteps where HLS data coverage was less than 70% are not shown.</p>
Full article ">Figure 7
<p>eMODIS versus predicted (upscaled to 250-m spatial resolution using bilinear interpolation) growing season (April–September) Normalized Difference Vegetation Index (NDVI × 100) for randomly selected pixels (n ~ 10,000). Model<sub>LS</sub> was generated using both harmonized Landsat-8 and Sentinel-2A inputs, while Model<sub>L</sub> was generated using only harmonized Landsat-8 inputs. Darker regions represent larger numbers of sampled pixels. Dashed-green line is 1-to-1 and the red line is the ordinary-least squares regression fit weighted by the inverse of the coefficient of variation (CV) determined from a focal scan of the eMODIS Growing Season NDVI product (e.g., more weight to homogenous areas [larger points]).</p>
Full article ">Figure 8
<p>eMODIS versus predicted (upscaled to 250-m spatial resolution using bilinear interpolation) normalized difference vegetation index (NDVI × 100) for randomly selected pixels (n ~ 10,000) during Weeks 22 and 23. Model<sub>LS</sub> was generated using both harmonized Landsat-8 and Sentinel-2A inputs, while downscaled NDVI was generated using regression tree models with harmonized Landsat-8 and eMODIS inputs. Darker regions represent larger numbers of sampled points. Dashed-green line is 1-to-1 and the red line is the ordinary-least squares regression fit weighted by the inverse of the coefficient of variation (CV) determined from a focal scan of the eMODIS growing season NDVI product (e.g., more weight to homogenous areas [larger points]).</p>
Full article ">
21 pages, 4714 KiB  
Article
An Image Fusion Method Based on Image Segmentation for High-Resolution Remotely-Sensed Imagery
by Hui Li, Linhai Jing, Yunwei Tang and Liming Wang
Remote Sens. 2018, 10(5), 790; https://doi.org/10.3390/rs10050790 - 19 May 2018
Cited by 11 | Viewed by 5219
Abstract
Fusion of high spatial resolution (HSR) multispectral (MS) and panchromatic (PAN) images has become a research focus with the development of HSR remote sensing technology. In order to reduce the spectral distortions of fused images, current image fusion methods focus on optimizing the [...] Read more.
Fusion of high spatial resolution (HSR) multispectral (MS) and panchromatic (PAN) images has become a research focus with the development of HSR remote sensing technology. In order to reduce the spectral distortions of fused images, current image fusion methods focus on optimizing the approach used to extract spatial details from the PAN band, or on optimizing the models employed during the injection of spatial details into the MS bands. Due to the resolution difference between the MS and PAN images, a large number of mixed pixels (MPs) exist in the upsampled MS images. The fused versions of these MPs remain mixed, although they may correspond to pure PAN pixels. This is one of the reasons for the spectral distortions of fusion products. However, few methods consider the spectral distortions introduced by the mixed fused spectra of MPs. In this paper, an image fusion method based on image segmentation was proposed to improve the fused spectra of MPs. The MPs were identified and then fused to be as close as possible to the spectra of pure pixels, in order to reduce spectral distortions caused by fused MPs and improve the quality of fused products. A fusion experiment, using three HSR datasets recorded by WorldView-2, WorldView-3 and GeoEye-1, respectively, was implemented to compare the proposed method with several other state-of-the-art fusion methods, such as haze- and ratio-based (HR), adaptive Gram–Schmidt (GSA) and smoothing filter-based intensity modulation (SFIM). Fused products generated at the original and degraded scales were assessed using several widely-used quantitative quality indexes. Visual inspection was also employed to compare the fused images produced using the original datasets. It was demonstrated that the proposed method offers the lowest spectral distortions and more sharpened boundaries between different image objects than other methods, especially for boundaries between vegetation and non-vegetation objects. Full article
(This article belongs to the Section Remote Sensing Image Processing)
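A much-simplified sketch of the mixed-pixel (MP) identification idea (compare Figure 3 below): flag segment-edge pixels, plus their immediate neighbors, as candidate MPs. It assumes the segmentation is available as a labeled integer raster; the authors' EMF segmentation and exact neighborhood rules are not reproduced here:

```python
import numpy as np
from scipy import ndimage

def mixed_pixels(segments):
    """Flag likely mixed pixels in a labeled segmentation raster.

    A pixel is an edge pixel if its 3x3 neighborhood contains more
    than one segment label; the edge set is then grown by one pixel.
    """
    maxf = ndimage.maximum_filter(segments, size=3)
    minf = ndimage.minimum_filter(segments, size=3)
    edges = maxf != minf
    grown = ndimage.binary_dilation(edges, structure=np.ones((3, 3)))
    return grown   # True where a pixel is treated as mixed
```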
Show Figures

Graphical abstract
Full article ">Figure 1
<p>The flow diagram for the proposed method. MS, multispectral.</p>
Full article ">Figure 2
<p>Flowchart of the edge, mark and fill (EMF) segmentation.</p>
Full article ">Figure 3
<p>The procedure of the identification of mixed pixels (MPs) within a segment. (<b>a</b>) edge pixels (in gray) of a segment. (<b>b</b>) neighbors (in orange) of the edge pixels shown in (<b>a</b>) within the same segment. (<b>c</b>) identified mixed pixels (in blue) in the segment.</p>
Full article ">Figure 4
<p>Identification of pure pixels in the neighborhood of an MP <span class="html-italic">p</span> within the segment <span class="html-italic">O</span><sub>2</sub>.</p>
Full article ">Figure 5
<p>The MS images of the three datasets used in the experiment; (<b>a</b>) WV-2 dataset; (<b>b</b>) WV-3 dataset; and (<b>c</b>) GE-1 dataset.</p>
Full article ">Figure 6
<p>The original and fused images for a 512 × 512 subset of the original WV-2 dataset; (<b>a</b>) 0.4-m PAN; (<b>b</b>) the upsampled version of 1.6-m MS; and fused images generated by the (<b>c</b>) HR-E, (<b>d</b>) HR, (<b>e</b>) GSA, (<b>f</b>) SFIM, (<b>g</b>) GLP-SDM, (<b>h</b>) AWLP and (<b>i</b>) ATWT methods.</p>
Full article ">Figure 7
<p>The original and fused images for a 512 × 512 subset of the original WV-3 dataset; (<b>a</b>) 0.4-m PAN; (<b>b</b>) the upsampled version of 1.6-m MS; and fused images generated by the (<b>c</b>) HR-E, (<b>d</b>) HR, (<b>e</b>) GSA, (<b>f</b>) SFIM, (<b>g</b>) GLP-SDM, (<b>h</b>) AWLP and (<b>i</b>) ATWT methods.</p>
Full article ">Figure 8
<p>The original and fused images for a 512 × 512 subset of the original GE-1 dataset; (<b>a</b>) 0.5-m PAN; (<b>b</b>) the upsampled version of 2-m MS; and fused images generated by the (<b>c</b>) HR-E, (<b>d</b>) HR, (<b>e</b>) GSA, (<b>f</b>) SFIM, (<b>g</b>) GLP-SDM, (<b>h</b>) AWLP and (<b>i</b>) ATWT methods.</p>
Full article ">Figure 9
<p>The QNR and <span class="html-italic">N</span><sub>MP</sub> of fused images for the three original datasets generated by the proposed method using different <span class="html-italic">T</span><sub>V</sub>, <span class="html-italic">T</span><sub>M</sub> and <span class="html-italic">T</span><sub>A</sub> values. (<b>a</b>) QNR and (<b>b</b>) <span class="html-italic">N</span><sub>MP</sub> of fused images of the proposed method using <span class="html-italic">T</span><sub>V</sub> values ranging between 0.1 and 0.5 with a step of 0.05; (<b>c</b>) QNR and (<b>d</b>) <span class="html-italic">N</span><sub>MP</sub> of fused images of the proposed method using <span class="html-italic">T</span><sub>M</sub> values ranging between 0.2 and 0.9 with a step of 0.1; (<b>e</b>) QNR and (<b>f</b>) <span class="html-italic">N</span><sub>MP</sub> of fused images of the proposed method using <span class="html-italic">T</span><sub>A</sub> values ranging from 10–100 with a step of 10.</p>
Full article ">Figure 10
<p>The QNR (<b>a</b>) and <span class="html-italic">N</span><sub>MP</sub> (<b>b</b>) of fused images for the three datasets generated using different <span class="html-italic">T</span><sub>V</sub> values, with a <span class="html-italic">T</span><sub>A</sub> value of 30.</p>
Full article ">
21 pages, 1962 KiB  
Article
Evaluation of a Bayesian Algorithm to Detect Burned Areas in the Canary Islands’ Dry Woodlands and Forests Ecoregion Using MODIS Data
by Francisco Guindos-Rojas, Manuel Arbelo, José R. García-Lázaro, José A. Moreno-Ruiz and Pedro A. Hernández-Leal
Remote Sens. 2018, 10(5), 789; https://doi.org/10.3390/rs10050789 - 19 May 2018
Cited by 10 | Viewed by 4499
Abstract
Burned Area (BA) is deemed as a primary variable to understand the Earth’s climate system. Satellite remote sensing data have allowed for the development of various burned area detection algorithms that have been globally applied to and assessed in diverse ecosystems, ranging from [...] Read more.
Burned Area (BA) is deemed a primary variable to understand the Earth’s climate system. Satellite remote sensing data have allowed for the development of various burned area detection algorithms that have been globally applied to and assessed in diverse ecosystems, ranging from tropical to boreal. In this paper, we present a Bayesian algorithm (BY-MODIS) that detects burned areas in a time series of Moderate Resolution Imaging Spectroradiometer (MODIS) images from 2002 to 2012 of the Canary Islands’ dry woodlands and forests ecoregion (Spain). Based on the daily MODIS image products MOD09GQ (250 m, surface spectral reflectance) and MOD11A1 (1 km, land surface temperature), 10-day composites were built using the maximum temperature criterion. The variables used in BY-MODIS were the Global Environment Monitoring Index (GEMI) and the Burn Boreal Forest Index (BBFI), alongside the NIR spectral band, all computed for both the year of the fire and the previous year. Reference polygons for the 14 fires exceeding 100 hectares identified within the period under analysis were developed using both post-fire LANDSAT images and official information from the forest fires national database by the Ministry of Agriculture and Fisheries, Food and Environment of Spain (MAPAMA). The results obtained by BY-MODIS were compared with those of the official burned area products, MCD45A1 and MCD64A1. Although the best overall results correspond to MCD64A1, BY-MODIS proved to be an alternative for burned area mapping in the Canary Islands, a region with great topographic complexity and diverse types of ecosystems. The total burned area detected by the BY-MODIS classifier was 64.9% of the MAPAMA reference data, and 78.6% according to data obtained from the LANDSAT images, with the lowest average commission error (11%) out of the three products and a correlation (R2) of 0.82. The Bayesian algorithm, originally developed to detect burned areas in North American boreal forests using the AVHRR Long-Term Data Record archive, can be successfully applied to a lower-latitude forest ecosystem totally different from the boreal ecosystem and using daily time series of satellite images from MODIS with a 250 m spatial resolution, as long as a set of training areas adequately characterising the dynamics of the forest canopy affected by the fire is defined. Full article
(This article belongs to the Special Issue Remote Sensing of Wildfire)
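As a minimal stand-in for the Bayesian classification step, the sketch below uses a Gaussian naive Bayes model on the features named in the abstract (GEMI, BBFI and the NIR band for the fire year and the previous year). It illustrates the Bayes-rule decision only; the authors' likelihood model and training-area construction are not reproduced:

```python
from sklearn.naive_bayes import GaussianNB

def classify_burned(features_train, labels_train, features_scene):
    """features_*: (n, 6) arrays holding GEMI, BBFI and NIR for the
    previous year and the fire year; labels: 1 = burned, 0 = unburned."""
    model = GaussianNB()
    model.fit(features_train, labels_train)
    # Posterior probability of 'burned' given the observed features.
    p_burned = model.predict_proba(features_scene)[:, 1]
    return p_burned > 0.5
```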
Show Figures

Figure 1
<p>The Canary Islands Archipelago (Spain) is located off the north-west African coast (Morocco). The image is an NDVI composite from the Terra-MODIS sensor at a 500-m spatial resolution on 30 December 2015, retrieved from the online Land, Atmosphere Near Real-time Capability for EOS (LANCE) system (<a href="https://lance-modis.eosdis.nasa.gov/imagery/subsets/?area=af" target="_blank">https://lance-modis.eosdis.nasa.gov/imagery/subsets/?area=af</a>). The red rectangle encloses the five islands studied.</p>
Full article ">Figure 2
<p>Flowchart of the Bayesian algorithm applied to the study region to obtain annual maps of Burned Area.</p>
Full article ">Figure 3
<p>Annual distribution of the Burned Area estimate (ha) in the study region for the analysed products and the reference data.</p>
Full article ">Figure 4
<p>The 30 km × 30 km subscenes with fires greater than 2000 ha for the MODIS-based MCD45A1, MCD64A1, and BY-MODIS. The black lines represent the Landsat perimeter of all the fires registered on each subscene. The geographic coordinates indicate the top left and bottom right corners of each of the subscenes.</p>
Full article ">
16 pages, 5653 KiB  
Letter
SMAP Soil Moisture Change as an Indicator of Drought Conditions
by Rajasekaran Eswar, Narendra N. Das, Calvin Poulsen, Ali Behrangi, John Swigart, Mark Svoboda, Dara Entekhabi, Simon Yueh, Bradley Doorn and Jared Entin
Remote Sens. 2018, 10(5), 788; https://doi.org/10.3390/rs10050788 - 19 May 2018
Cited by 45 | Viewed by 7156
Abstract
Soil moisture is considered a key variable in drought analysis. The soil moisture dynamics given by the change in soil moisture between two time periods can provide information on the intensification or improvement of drought conditions. The aim of this work is to [...] Read more.
Soil moisture is considered a key variable in drought analysis. The soil moisture dynamics given by the change in soil moisture between two time periods can provide information on the intensification or improvement of drought conditions. The aim of this work is to analyze how the soil moisture dynamics respond to changes in drought conditions over multiple time intervals. The change in soil moisture estimated from the Soil Moisture Active Passive (SMAP) satellite observations was compared with the United States Drought Monitor (USDM) and the Standardized Precipitation Index (SPI) over the contiguous United States (CONUS). The results indicated that the soil moisture change over 13-week and 26-week intervals is able to capture the changes in drought intensity levels in the USDM, and the change over a four-week interval correlated well with the one-month SPI values. This suggested that a short-term negative soil moisture change may indicate a lack of precipitation, whereas a persistent long-term negative soil moisture change may indicate severe drought conditions. The results further indicate that the inclusion of soil moisture change will add more value to the existing drought-monitoring products. Full article
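A minimal sketch of the soil-moisture-change indicator, assuming a weekly SMAP soil moisture series held in a pandas Series indexed by date; the 4-, 13- and 26-week differencing follows the abstract, while the data handling details are assumptions:

```python
import pandas as pd

def soil_moisture_change(sm_weekly: pd.Series, weeks: int) -> pd.Series:
    """Change in soil moisture over a given interval (4, 13 or 26 weeks).

    Negative values indicate net drying over the interval, which the
    paper relates to drought intensification."""
    return sm_weekly - sm_weekly.shift(weeks)

# delta13 = soil_moisture_change(sm, 13)
# Compare with the 13-week change in USDM drought intensity:
# r = delta13.corr(usdm_intensity - usdm_intensity.shift(13))
```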
Show Figures

Figure 1
<p>The scatter plot between 13-week soil moisture changes and 13-week drought intensity changes (<b>a</b>) as observed; and (<b>b</b>) after averaging all of the soil moisture change values corresponding to each drought-intensity change. Similarly, (<b>c</b>,<b>d</b>) show the relationship between 13-week soil moisture change and the three-monthly Standardized Precipitation Index (SPI). These plots were created for a randomly selected grid (27.101°N, 81.224°W) in the contiguous United States (CONUS).</p>
Full article ">Figure 2
<p>Slope of the linear regression between the change in soil moisture and the change in drought intensity over CONUS at multiple time intervals: (<b>a</b>) four weeks, (<b>b</b>) 13 weeks, and (<b>c</b>) 26 weeks.</p>
Full article ">Figure 3
<p>Correlation coefficient of the linear regression between the change in soil moisture and the change in drought intensity over CONUS at multiple time intervals: (<b>a</b>) four weeks, (<b>b</b>) 13 weeks, and (<b>c</b>) 26 weeks.</p>
Full article ">Figure 4
<p>The relationship between 26-week soil moisture change and 26-week drought intensity change shown in form of (<b>a</b>,<b>c</b>) scatter plots and (<b>b</b>,<b>d</b>) a time series over two selected grids in the CONUS. The gaps in soil moisture in (<b>d</b>) are due to the freezing of the land surface during winter months.</p>
Full article ">Figure 5
<p>Time series of soil moisture and drought intensity changes over a grid in Florida (27.101°N, 81.224°W) at (<b>a</b>) four-week, (<b>b</b>) 13-week, and (<b>c</b>) 26-week time intervals. In the figure, DI indicates drought intensity.</p>
Full article ">Figure 6
<p>An example showing how the change in soil moisture responds to the change in drought conditions over the southern United States (USA). The United States Drought Monitor (USDM) maps are downloaded from the drought monitor website <a href="http://droughtmonitor.unl.edu" target="_blank">http://droughtmonitor.unl.edu</a>.</p>
Full article ">Figure 7
<p>Slope of the linear regression between the change in soil moisture and the Standardized Precipitation Index (SPI) over CONUS at multiple time intervals (<b>a</b>) four weeks, (<b>b</b>) 13 weeks, and (<b>c</b>) 26 weeks.</p>
Full article ">Figure 8
<p>Spatial distribution of the correlation between the change in soil moisture and SPI over CONUS at multiple time intervals (<b>a</b>) four weeks, (<b>b</b>) 13 weeks, and (<b>c</b>) 26 weeks.</p>
Full article ">Figure 9
<p>Time series of soil moisture change and SPI over a grid in Florida (27.101°N, 81.224°W). The plots (<b>a</b>–<b>c</b>) indicate the comparison of four-week, 13-week, and 26-week soil moisture change intervals with one, three, and six-monthly SPI intervals, respectively.</p>
Full article ">Figure 10
<p>The relationship between 26-week soil moisture change and 26-week drought intensity change, shown in form of (<b>a</b>) a scatter plot and (<b>b</b>) a time series over Arizona (32.288°N, 114.087°W).</p>
Full article ">Figure 11
<p>The relationship between 26-week soil moisture change and 26-week drought intensity change (plots (<b>a</b>,<b>b</b>)), and the relationship between 26-week soil moisture change and six-month SPI (plots (<b>c</b>,<b>d</b>)) over the central valley in California (37.077°N, 120.436°W).</p>
Full article ">Figure A1
<p>Location of different grid points used as examples in various figures in the main text. The Label of each grid point refers to the corresponding figure in the main text. (Image courtesy: Google Earth™).</p>
Full article ">
21 pages, 12732 KiB  
Article
Aerial and Ground Based Sensing of Tolerance to Beet Cyst Nematode in Sugar Beet
by Samuel Joalland, Claudio Screpanti, Hubert Vincent Varella, Marie Reuther, Mareike Schwind, Christian Lang, Achim Walter and Frank Liebisch
Remote Sens. 2018, 10(5), 787; https://doi.org/10.3390/rs10050787 - 19 May 2018
Cited by 42 | Viewed by 7180
Abstract
The rapid development of image-based phenotyping methods based on ground-operating devices or unmanned aerial vehicles (UAV) has increased our ability to evaluate traits of interest for crop breeding in the field. A field site infested with beet cyst nematode (BCN) and planted with four nematode-susceptible cultivars and five tolerant cultivars was investigated at different times during the growing season. We compared the ability of spectral, hyperspectral, canopy height and temperature information derived from handheld and UAV-borne sensors to discriminate susceptible and tolerant cultivars and to predict the final sugar beet yield. Spectral indices (SIs) related to chlorophyll, nitrogen or water allowed differentiating nematode-susceptible and tolerant cultivars (cultivar type) from the same genetic background (breeder). Discrimination between the cultivar types was easier at advanced stages, when the nematode pressure was stronger and the plants and canopies further developed. The canopy height (CH) allowed differentiating cultivar type as well, but was much more efficient from the UAV compared to manual field assessment. Canopy temperatures also allowed ranking cultivars according to their nematode tolerance level. Combinations of SIs in multivariate analysis and decision trees improved differentiation of cultivar type and classification of genetic background. In addition, SIs and canopy temperature proved to be suitable proxies for sugar yield prediction. The spectral information derived from the handheld and the UAV-borne sensors did not match perfectly, but both analysis procedures allowed for discrimination between susceptible and tolerant cultivars. This was possible due to successful detection of traits related to BCN tolerance like chlorophyll, nitrogen and water content, which were reduced in cultivars with a low tolerance to BCN. The high correlation between SIs and final sugar beet yield makes the UAV hyperspectral imaging approach very suitable to improve farming practice via maps of yield potential or diseases. Moreover, the study shows the high potential of multi-sensor and parameter combinations for plant phenotyping purposes, in particular for data from UAV-borne sensors that allow for standardized and automated high-throughput data extraction procedures. Full article
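As a concrete illustration of the kind of spectral indices (SIs) relied on here, the sketch below computes two standard indices, NDVI as a greenness/chlorophyll proxy and the water band index (WBI) as a canopy water proxy, from a 1 nm resolution reflectance spectrum. The spectrum is synthetic and the band positions follow common definitions from the literature, not necessarily the exact index set used in the paper.

```python
# Sketch: compute NDVI and WBI from a synthetic canopy reflectance spectrum.
# Band positions are standard literature values; data are placeholders.
import numpy as np

# Synthetic canopy reflectance sampled at 1 nm from 400-1000 nm (values 0-1).
wavelengths = np.arange(400, 1001)
rng = np.random.default_rng(1)
reflectance = np.clip(
    0.05 + 0.4 * (wavelengths > 700) + 0.02 * rng.standard_normal(wavelengths.size),
    0, 1,
)

def band(wl: int) -> float:
    """Reflectance at the requested wavelength (nm)."""
    return float(reflectance[wl - 400])

ndvi = (band(800) - band(670)) / (band(800) + band(670))  # greenness/chlorophyll proxy
wbi = band(900) / band(970)                               # canopy water proxy
print(f"NDVI = {ndvi:.2f}, WBI = {wbi:.2f}")
```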
Show Figures
Graphical abstract
Figure 1: Orthophoto of the experimental field extracted from the hyperspectral imager containing the experimental setup of the two investigated field trials (A); detail of the reflectance plates used for radiometric calibration as placed in the field (B); contour map of the initial BCN population density in the topsoil (0–30 cm) (C). The map was linearly interpolated from the 108 data points representing the sampled plots, using the average of its neighbors for non-sampled plots.
Figure 2: Average initial (pi) and final (pf) BCN populations in the soil for nine different cultivars. Error bars represent the standard error. Different letters behind pf/pi values indicate significant differences at p < 0.05.
Figure 3: Beet fresh weight at harvest as a function of the nematode reproduction rate (pf/pi) in the soil for the four susceptible and five tolerant genotypes. Correlations were not significant.
Figure 4: (A) Relationship between canopy height derived from the digital surface model (CH_DSM) and canopy height measured by a ruler (CH_ruler) 102 das (n = 104, p < 0.01); (B) final sugar yield as a function of CH_DSM 152 das (n = 104, p < 0.01); (C) average CH_ruler of tolerant and susceptible cultivars 102 das; (D) average CH_DSM of tolerant and susceptible cultivars 102 and 152 das. Error bars represent the standard error. Different lowercase letters indicate significant differences between cultivar types at p < 0.05.
Figure 5: Average canopy temperatures of five tolerant and four susceptible cultivars 153 das. Error bars represent the standard error. Different lowercase letters indicate significant differences between genotypes (p < 0.05).
Figure 6: White sugar yield at harvest as a function of the canopy temperature 152 das at the (A) plot and (B) cultivar levels.
Figure 7: (A) Coefficient of determination R² for the relationship between SIs computed from the field spectrometer and from the UAV hyperspectral imager; * significant correlations (n = 96, p < 0.05). (B) Relationship between the CHLG index computed from the field spectrometer and the UAV hyperspectral imager at 102 (p < 0.01) and 152 das (p < 0.01).
Figure 8: Principal component analysis (PCA) of the main phenotyping parameters 152 das. The percentage of variance explained by each component is displayed in parentheses. (A,B) display PCA using ground data (main field spectrometer SIs and canopy temperature); (C,D) show PCA using main SIs and canopy height extracted from the UAV. The susceptible and tolerant cultivars are depicted in blue and orange, respectively, in graphs (A,C). The genetic backgrounds (breeders) are shown in different colors in graphs (B,D). Each data point represents one field plot (n = 96).
25 pages, 2171 KiB  
Article
Machine Learning Regression Approaches for Colored Dissolved Organic Matter (CDOM) Retrieval with S2-MSI and S3-OLCI Simulated Data
by Ana Belen Ruescas, Martin Hieronymi, Gonzalo Mateo-Garcia, Sampsa Koponen, Kari Kallio and Gustau Camps-Valls
Remote Sens. 2018, 10(5), 786; https://doi.org/10.3390/rs10050786 - 19 May 2018
Cited by 71 | Viewed by 9047
Abstract
The colored dissolved organic matter (CDOM) variable is the standard measure of humic substances in water optics. CDOM is optically characterized by its spectral absorption coefficient, a_CDOM, at a reference wavelength (e.g., ≈440 nm). Retrieval of CDOM is traditionally done using bio-optical models. As an alternative, this paper presents a comparison of five machine learning methods applied to Sentinel-2 and Sentinel-3 simulated reflectance (R_rs) data for the retrieval of CDOM: regularized linear regression (RLR), random forest regression (RFR), kernel ridge regression (KRR), Gaussian process regression (GPR) and support vector regression (SVR). Two different datasets of radiative transfer simulations are used for the development and training of the machine learning regression approaches. A statistical comparison with well-established polynomial regression algorithms shows promising results for all models and band combinations, highlighting the good performance of the methods, especially the GPR approach, when all bands are used as input. Application to an atmospherically corrected OLCI image using the reflectance derived from the alternative neural network (Case 2 Regional) is also shown. Python scripts and notebooks are provided to interested users. Full article
(This article belongs to the Special Issue Remote Sensing of Ocean Colour)
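For readers who want to reproduce the flavor of this comparison, the sketch below fits the five regressor families named in the abstract with scikit-learn on synthetic band data. The hyperparameters, the mock a_CDOM(440) target, and the single error metric are placeholders standing in for the paper's radiative-transfer simulations and tuning, not the authors' setup.

```python
# Sketch: compare RLR/RFR/KRR/GPR/SVR for CDOM retrieval on mock band data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.kernel_ridge import KernelRidge
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(42)
X = rng.random((500, 5))                                   # mock reflectances in 5 bands
y = 2.0 * X[:, 1] / (X[:, 3] + 0.05) + 0.1 * rng.standard_normal(500)  # mock a_CDOM(440)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
models = {
    "RLR": Ridge(alpha=1.0),
    "RFR": RandomForestRegressor(n_estimators=200, random_state=0),
    "KRR": KernelRidge(kernel="rbf", alpha=0.1),
    "GPR": GaussianProcessRegressor(alpha=1e-2, normalize_y=True),
    "SVR": SVR(kernel="rbf", C=10.0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
    print(f"{name}: RMSE = {rmse:.3f}")
```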
Show Figures
Graphical abstract
Figure 1: Polynomial regressions using the simulated Sentinel-2 Multi-Spectral Instrument (S2-MSI) and Sentinel-3 Ocean and Land Colour Instrument (S3-OLCI) configuration datasets.
Figure 2: Box plots of the residuals for S2 two ratios and S2 All bands + ratios. On each box, the central mark is the median, the edges of the box are the lower hinge (the 25th percentile) and the upper hinge (the 75th percentile), and the whiskers extend to the most extreme data points not considered outliers.
Figure 3: Comparison of the performance of the models using the linear regression representation, S2-MSI. The y-axis shows the normalized CDOM values; the x-axis shows the value of the ratios.
Figure 4: Permutation plots for four ML methods for the S2-MSI configuration.
Figure 5: Box plots of the residuals for S3 two ratios and S3 All bands + ratios.
Figure 6: Comparison of the performance of the models using the linear regression representation, S3-OLCI.
Figure 7: Permutation plots for four ML methods for the S3-OLCI configuration.
Figure 8: Ranges and mean of reflectance spectra at S3-OLCI wavebands from the C2X dataset for the C2A and C2AX subsets. The utilized S2-MSI and S3-OLCI bands are highlighted for convenience.
Figure 9: Statistical box plots of the residual errors (top) and permutation plots of the RLR and GPR models (bottom) for C2X S3-OLCI.
Figure 10: Retrieval performance with respect to the C2A(X) CDOM simulations: top left, scatter plot of the Polyfit method with Ratio1; top right, scatter plot of the Polyfit method with Ratio2; bottom left, the RLR method using all available bands and the two ratios as input; bottom right, the GPR method with all available bands and the two ratios.
Figure 11: Absorption coefficient of CDOM at 440 nm retrieved by ONNS (IOP NNs) for the C2A (red dots) and C2AX (blue dots) spectral types. The plot is in log10 scale.
Figure 12: Computational time in seconds of each model and band combination on a standard OLCI scene.
Figure 13: Comparison of ADG443_NN vs. the GPR model (m⁻¹): left, the ADG443_NN product; right, the GPR CDOM output.
Figure 14: Comparison of ADG443_NN uncertainties vs. the uncertainties of the GPR model (m⁻¹): left, ADG443 uncertainties; right, GPR CDOM uncertainties.
27 pages, 4966 KiB  
Article
Remotely Sensing the Biophysical Drivers of Sardinella aurita Variability in Ivorian Waters
by Jean-Baptiste Kassi, Marie-Fanny Racault, Brice A. Mobio, Trevor Platt, Shubha Sathyendranath, Dionysios E. Raitsos and Kouadio Affian
Remote Sens. 2018, 10(5), 785; https://doi.org/10.3390/rs10050785 - 18 May 2018
Cited by 15 | Viewed by 6368
Abstract
The coastal regions of the Gulf of Guinea constitute one of the major marine ecosystems, producing essential living marine resources for the populations of Western Africa. In this region, the Ivorian continental shelf is under pressure from various anthropogenic sources, which have put the regional fish stocks, especially Sardinella aurita, the dominant pelagic species in Ivorian industrial fishery landings, under threat from overfishing. Here, we combine in situ observations of Sardinella aurita catch, temperature, and nutrient profiles with remote-sensing ocean-color observations and reanalysis data of wind and sea surface temperature, to investigate relationships between Sardinella aurita catch and oceanic primary producers (including biomass and phenology of phytoplankton), and between Sardinella aurita catch and environmental conditions (including upwelling index and turbulent mixing). We show that variations in Sardinella aurita catch in the following year may be predicted, with a confidence of 78%, based on a bilinear model using only physical variables, and with a confidence of 40% when using only biological variables. However, the physics-based model alone is not sufficient to explain the mechanism driving the year-to-year variations in Sardinella aurita catch. Based on the analysis of the relationships between biological variables, we demonstrate that on the Ivorian continental shelf, during the study period 1998–2014, the population dynamics of Sardinella aurita and of oceanic primary producers may have been controlled mainly by top-down trophic interactions. Finally, we discuss how the predictive models constructed here can provide powerful tools to support the evaluation and monitoring of fishing activity, which may help towards the development of a Fisheries Information and Management System. Full article
(This article belongs to the Special Issue Remote Sensing of Ocean Colour)
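The "bilinear model using only physical variables" amounts to an ordinary least-squares fit of next-year catch on two predictors. A minimal sketch with synthetic data follows; the anomalies and coefficients are invented placeholders, not the paper's values.

```python
# Sketch: fit catch(t+1) = b0 + b1*upwelling(t) + b2*mixing(t) by least squares.
import numpy as np

rng = np.random.default_rng(7)
years = np.arange(1998, 2014)
upwelling = rng.normal(0.0, 1.0, years.size)   # annual-mean anomaly, year t
mixing = rng.normal(0.0, 1.0, years.size)      # June-December mean anomaly, year t
catch_next = 3.0 * upwelling - 2.0 * mixing + rng.normal(0, 0.5, years.size)  # year t+1

A = np.column_stack([np.ones_like(upwelling), upwelling, mixing])
coef, *_ = np.linalg.lstsq(A, catch_next, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((catch_next - pred) ** 2) / np.sum((catch_next - catch_next.mean()) ** 2)
print("coefficients:", np.round(coef, 2), "R^2:", round(r2, 2))
```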
Show Figures
Graphical abstract
Figure 1: Comparison of Sardinella aurita landings (tons) in the fishing zone of Abidjan produced by the Ministry of Production of Animals (MPA) at the Abidjan Fisheries Direction with the corresponding landings for all Ivorian waters produced by the Food and Agriculture Organization (FAO) for the period 1997–2014.
Figure 2: Spatial and seasonal variations of (a) chlorophyll (mg m⁻³), (b) SST (°C), (c) upwelling index (m³ s⁻¹ km⁻¹), and (d) wind-induced turbulent mixing (m³ s⁻³). Left panel: mean for 1998–2014; the boxed area against the coast indicates the geographical location of the fishing zone of Abidjan. Right panel: box-area climatological monthly means for 1998–2014 with standard error (gray shading).
Figure 3: Seasonal variations of (a) thermocline depth (m), (b) surface temperature (°C), (c) nitracline depth (m), (d) nitracline depth-integrated nitrate concentration (mol m⁻²), (e) phosphatocline depth (m), and (f) phosphatocline depth-integrated phosphate concentration (mol m⁻²). Left panel: the blue line is the upper bound of the layer and the turquoise line is the lower bound. All observations are based on in situ measurements collected in the Abidjan fishing zone at the point of coordinates 5.5°W, 4.5°N. Water column oceanographic profiles are from the World Ocean Atlas 2013 [56].
Figure 4: Maps of relative anomalies in annual mean chlorophyll concentration, expressed in percentage, in Ivorian waters. Chlorophyll concentration anomalies are calculated using the Ocean Color Climate Change Initiative (OC-CCI) product for the period 1998 to 2014. Blue (red) indicates lower (higher) chlorophyll concentration relative to the 17-year mean. The boxed area against the coast indicates the geographical location of the fishing zone of Abidjan.
Figure 5: Maps of anomalies in the timing of initiation (8-day period) of phytoplankton growth in Ivorian waters. The timing of initiation is calculated using the OC-CCI chlorophyll product and applying the threshold method presented in Racault et al. [57]. Anomalies are estimated for the period 1998 to 2014. Blue (red) indicates earlier (later) phytoplankton growth initiation compared with the 17-year mean. The boxed area against the coast indicates the geographical location of the fishing zone of Abidjan.
Figure 6: Interannual variability during the period 1998 to 2014 in (a) Sardinella aurita catch (tons) in year t + 1 and timing of initiation (weeks) in year t; and (b) Sardinella aurita catch (tons) in year t and chlorophyll concentration (mg m⁻³) in year t. In (a), the relationship is shown for phytoplankton bloom timing in year t and S. aurita catch in year t + 1, as we are investigating the influence of bloom timing (food availability) on S. aurita recruitment success, which shows up in the catch of the following year. In (b), the relationship is shown for chlorophyll concentration and S. aurita catch in the same year, t, as we are investigating the grazing pressure of S. aurita on chlorophyll concentration in that year. All data were averaged over the area of the fishing zone of Abidjan (see box-area location in Figure 2).
Figure 7: Maps of upwelling index anomalies (m³ s⁻¹ km⁻¹). Anomalies are estimated for the period 1998 to 2014. Blue (red) indicates stronger (weaker) upwelling conditions compared to the 17-year mean. The boxed area against the coast indicates the geographical location of the fishing zone of Abidjan.
Figure 8: Maps of wind-induced turbulent mixing anomalies (m³ s⁻³). Anomalies are estimated for the period 1998 to 2014. Blue (red) indicates stronger (weaker) turbulent conditions compared with the 17-year mean. The boxed area against the coast indicates the geographical location of the fishing zone of Abidjan.
Figure 9: Interannual variability during the period 1998 to 2014 in (a) Sardinella aurita catch (tons) in year t + 1 and upwelling index (annual mean, m³ s⁻¹ km⁻¹) in year t; and (b) Sardinella aurita catch (tons) in year t + 1 and wind-induced turbulent mixing (June–December mean, m³ s⁻³) in year t. The relationships are shown for the physical variables in year t and S. aurita catch in year t + 1, as we are investigating the influence of ocean physics on S. aurita recruitment success, which shows up in the catch of the following year. All data were averaged over the area of the fishing zone of Abidjan (see box-area location in Figure 2).
Figure 10: Plots of empirical relationships between interannual variations in Sardinella aurita catch, phytoplankton chlorophyll, phytoplankton timing of initiation, turbulent mixing, and upwelling index. Annual mean chlorophyll concentration in mg m⁻³; timing of initiation of phytoplankton growth in days; June-to-December mean wind-induced turbulent mixing in m³ s⁻³; and annual mean upwelling index in m³ s⁻¹ km⁻¹. In (a), the relationship is shown for chlorophyll concentration and S. aurita catch in the same year t, as we are investigating grazing pressure on chlorophyll concentration in that year. Similarly, in (c–f), the relationships are shown for variables in year t, as we are investigating the influence of ocean physics on the concentration and timing of the first trophic level in the same year, and in (i,j), as we are investigating autocorrelation between physical variables and between biological variables. In (b,g,h), the relationships are shown for biophysical variables in year t and S. aurita catch in year t + 1, as we are investigating the influence of the biophysical variables on the S. aurita catch success in the following year. All data were averaged over the area of the fishing zone of Abidjan (see box-area location in Figure 2).
Figure 11: Comprehensive assessment of the predictive performance of diagnostic models using all possible combinations of input variables. (a–o) Regression analyses between observed and predicted Sardinella aurita catch based on combinations of four predictive variables, i.e., anomalies of timing of initiation, annual mean chlorophyll concentration, annual mean upwelling index, and June-to-December mean wind-induced turbulent mixing in year t. The regression statistics are reported using the adjusted R², which increases only if added variables are relevant. Significances are calculated using the F-test. All data were averaged over the area of the fishing zone of Abidjan (see box-area location in Figure 2).
Figure 12: Conceptual diagram for the year-to-year variations in Sardinella aurita catch in relation to the first trophic level and ocean physics along the Ivorian continental shelf. (a) A year when Sardinella catch is high; (b) a year when Sardinella catch is low. Lowercase letters a–j refer to the empirical relationships presented in Figure 10.
Figure A1: Maps of annual mean chlorophyll concentration in mg Chl m⁻³ for the years 1998 to 2014. The boxed area against the coast indicates the geographical location of the fishing zone of Abidjan.
Figure A2: Maps of the timing of initiation of phytoplankton growth in days for the years 1998 to 2014. The boxed area against the coast indicates the geographical location of the fishing zone of Abidjan.
Figure A3: Maps of annual mean upwelling index in m³ s⁻¹ km⁻¹ for the years 1998 to 2014. The boxed area against the coast indicates the geographical location of the fishing zone of Abidjan.
Figure A4: Maps of annual mean wind-induced turbulent mixing in m³ s⁻³ for the years 1998 to 2014. The boxed area against the coast indicates the geographical location of the fishing zone of Abidjan.
18 pages, 3355 KiB  
Article
Hyperspectral Measurement of Seasonal Variation in the Coverage and Impacts of an Invasive Grass in an Experimental Setting
by Yuxi Guo, Sarah Graves, S. Luke Flory and Stephanie Bohlman
Remote Sens. 2018, 10(5), 784; https://doi.org/10.3390/rs10050784 - 18 May 2018
Cited by 6 | Viewed by 4643
Abstract
Hyperspectral remote sensing can be a powerful tool for detecting invasive species and their impact across large spatial scales. However, remote sensing studies of invasives rarely occur across multiple seasons, although the properties of invasives often change seasonally. This may limit the detection of invasives using remote sensing through time. We evaluated the ability of hyperspectral measurements to quantify the coverage of a plant invader and its impact on senesced plant coverage and canopy equivalent water thickness (EWT) across seasons. A portable spectroradiometer was used to collect data in a field experiment where uninvaded plant communities were experimentally invaded by cogongrass, a non-native perennial grass, or maintained as an uninvaded reference. Vegetation canopy characteristics, including senesced plant material, the ratio of live to senesced plants, and canopy EWT, varied across the seasons and showed different temporal patterns between the invaded and reference plots. Partial least squares regression (PLSR) models based on a single season had limited predictive ability for data from a different season. Models trained with data from multiple seasons successfully predicted invasive plant coverage and vegetation characteristics across multiple seasons and years. Our results suggest that, if seasonal variation is accounted for, the hyperspectral measurement of invaders and their effects on uninvaded vegetation may be scaled up to quantify effects at landscape scales using airborne imaging spectrometers. Full article
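A multi-season PLSR model of the kind described can be assembled directly with scikit-learn. The sketch below uses synthetic spectra pooled across seasons; the number of components, band count, and target construction are illustrative assumptions, not the study's configuration.

```python
# Sketch: PLSR linking pooled multi-season canopy spectra to invader coverage.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_plots, n_bands = 120, 400               # plots pooled across seasons; spectral bands
spectra = rng.random((n_plots, n_bands))  # mock reflectance spectra
# Mock coverage driven by a difference of two bands plus noise.
coverage = spectra[:, 50] - spectra[:, 300] + 0.1 * rng.standard_normal(n_plots)

pls = PLSRegression(n_components=5)
scores = cross_val_score(pls, spectra, coverage, cv=5, scoring="r2")
print("cross-validated R^2 per fold:", scores.round(2))
```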
Show Figures
Graphical abstract
Figure 1: Box plots showing the following vegetation characteristics in ambient (non-drought) subplots for uninvaded and cogongrass-invaded subplots through time: (a) cogongrass coverage, (b) dead plant coverage, (c) live-to-dead material ratio, and (d) canopy equivalent water thickness (EWT). On the x-axis, the sample periods from the wet season are in bold italics. In (a), the cogongrass coverage for the uninvaded subplots in all sampling periods is zero. Data ranking was implemented with the Kruskal–Wallis test. Lowercase letters denote the statistical significance of Dunn's post hoc multiple comparisons among treatments.
Figure 2: Mean hyperspectral reflectance for different quantiles of the following vegetation characteristics: (a) cogongrass coverage, (b) dead plant coverage, (c) live-to-dead ratio, and (d) canopy equivalent water thickness (EWT) for the full spectral range. Data were combined from all sampling periods.
Figure 3: Results of the PLSR models for cogongrass coverage, dead plant coverage, the live-to-dead ratio, and canopy EWT, with training and testing data drawn from all dates combined, for (a) calibration and (b) independent validation. The black lines are 1:1 lines. R² = coefficient of determination; aR² = adjusted coefficient of determination; EWT = equivalent water thickness.
Figure 4: PLSR model regression coefficients plotted by wavelength for four variables: (a) cogongrass cover, (b) dead plant cover, (c) live-to-dead ratio, and (d) canopy equivalent water thickness (EWT). The wavelength centers of the water band index (WBI, red), normalized difference water index (NDWI, yellow) [49], normalized difference infrared index (NDII, blue) [50], and normalized difference vegetation index (NDVI, green) [51] are shown as vertical reference lines.
18 pages, 3202 KiB  
Article
Deep Cube-Pair Network for Hyperspectral Imagery Classification
by Wei Wei, Jinyang Zhang, Lei Zhang, Chunna Tian and Yanning Zhang
Remote Sens. 2018, 10(5), 783; https://doi.org/10.3390/rs10050783 - 18 May 2018
Cited by 31 | Viewed by 7327
Abstract
Advanced classification methods, which can fully utilize the 3D characteristics of hyperspectral images (HSI) and generalize well to test data given only limited labeled training samples (i.e., a small training dataset), have long been a research objective for the HSI classification problem. Motivated by the success of deep-learning-based methods, a cube-pair-based convolutional neural network (CNN) classification architecture is proposed in this study to meet this objective, where cube pairs are used to address the small-training-dataset problem as well as to preserve the 3D local structure of HSI data. Within this architecture, a 3D fully convolutional network is further modeled, which has fewer parameters than a traditional CNN. Given the same amount of training samples, the modeled network can go deeper than a traditional CNN and thus has superior generalization ability. Experimental results on several HSI datasets demonstrate that the proposed method achieves superior classification results compared with other state-of-the-art competing methods. Full article
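To make the 3D fully convolutional idea concrete, the PyTorch sketch below stacks 3D convolutions over a (bands × height × width) cube and replaces fully connected layers with a 1 × 1 × 1 convolution. The layer sizes are arbitrary placeholders, not the authors' DCPN architecture.

```python
# Sketch: a tiny 3D fully convolutional network for HSI cube classification.
import torch
import torch.nn as nn

class Tiny3DFCN(nn.Module):
    def __init__(self, n_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)),  # spectral x spatial
            nn.ReLU(inplace=True),
            nn.Conv3d(8, 16, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
            nn.ReLU(inplace=True),
        )
        self.head = nn.Conv3d(16, n_classes, kernel_size=1)  # 1x1x1 conv instead of FC
        self.pool = nn.AdaptiveAvgPool3d(1)

    def forward(self, cube):  # cube: (batch, 1, bands, height, width)
        x = self.head(self.features(cube))
        return self.pool(x).flatten(1)  # (batch, n_classes) logits

logits = Tiny3DFCN(n_classes=16)(torch.randn(2, 1, 103, 9, 9))
print(logits.shape)  # torch.Size([2, 16])
```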
Show Figures
Graphical abstract
Figure 1: Cube-pair-based convolutional neural network (CNN) classification architecture.
Figure 2: Structure of the proposed DCPN.
Figure 3: Classification maps of different methods on the Indian Pines dataset.
Figure 4: Classification maps of different methods on the PaviaU dataset.
Figure 5: Classification maps of different methods on the Salinas dataset.
Figure 6: Colors represent different classes for the three datasets.
Figure 7: Classification performance with different numbers of training samples on the Indian Pines dataset.
Figure 8: Classification performance with different numbers of training samples on the PaviaU dataset.
Figure 9: Classification performance with different numbers of training samples on the Salinas dataset.
20 pages, 1193 KiB  
Article
Natural Forest Mapping in the Andes (Peru): A Comparison of the Performance of Machine-Learning Algorithms
by Luis Alberto Vega Isuhuaylas, Yasumasa Hirata, Lenin Cruyff Ventura Santos and Noemi Serrudo Torobeo
Remote Sens. 2018, 10(5), 782; https://doi.org/10.3390/rs10050782 - 18 May 2018
Cited by 39 | Viewed by 7031
Abstract
The Andes mountain forests are sparse relict populations of tree species that grow in association with local native shrubland species. The identification of forest conditions for conservation in areas such as these is based on remote sensing techniques and classification methods. However, the classification of Andes mountain forests is difficult because of noise in the reflectance data within land cover classes. The noise is the result of variations in terrain illumination resulting from complex topography and the mixture of different land cover types occurring at the sub-pixel level. Considering these issues, the selection of an optimum classification method to obtain accurate results is very important to support conservation activities. We carried out comparative non-parametric statistical analyses on the performance of several classifiers produced by three supervised machine-learning algorithms: Random Forest (RF), Support Vector Machine (SVM), and k-Nearest Neighbor (kNN). The SVM and RF methods were not significantly different in their ability to separate Andes mountain forest and shrubland land cover classes, and their best classifiers showed a significantly better classification accuracy (AUC values 0.81 and 0.79 respectively) than the one produced by the kNN method (AUC value 0.75) because the latter was more sensitive to noisy training data. Full article
(This article belongs to the Special Issue Mountain Remote Sensing)
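A minimal version of the RF/SVM/kNN comparison by AUC can be run with scikit-learn, as sketched below on synthetic two-class data; the features, hyperparameters, and single train/test split stand in for the paper's full cross-validation and Nemenyi testing.

```python
# Sketch: compare RF, SVM and kNN by ROC AUC on a mock forest/shrubland problem.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X = rng.random((600, 6))  # mock reflectance + terrain features
y = (X[:, 0] + 0.5 * X[:, 3] + 0.3 * rng.standard_normal(600) > 0.8).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

classifiers = {
    "RF": RandomForestClassifier(n_estimators=300, random_state=0),
    "SVM": SVC(kernel="rbf", probability=True, random_state=0),
    "kNN": KNeighborsClassifier(n_neighbors=7),
}
for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.2f}")
```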
Show Figures
Graphical abstract
Figure 1: Study area in Peru. The red box outlines the image area.
Figure 2: Preprocessing flowchart.
Figure 3: Critical Difference (CD) diagram for the Nemenyi test showing the results of the statistical comparison of all models against each other by mean ranks based on AUC values (higher ranks, such as 5.9 for SVM:All, correspond to higher AUC values). Classifiers that are not connected by a bold line of length equal to CD have significantly different mean ranks (confidence level of 95%).
Figure 4: Critical Difference (CD) diagram for the Nemenyi test showing the results of the statistical comparison of all models against each other by mean ranks based on Cohen's Kappa values (higher ranks, such as 4.9 for SVM:All, correspond to higher Cohen's Kappa values). Classifiers that are not connected by a bold line of length equal to CD have significantly different mean ranks (confidence level of 95%).
26 pages, 28077 KiB  
Article
Region Merging Considering Within- and Between-Segment Heterogeneity: An Improved Hybrid Remote-Sensing Image Segmentation Method
by Yongji Wang, Qingyan Meng, Qingwen Qi, Jian Yang and Ying Liu
Remote Sens. 2018, 10(5), 781; https://doi.org/10.3390/rs10050781 - 18 May 2018
Cited by 33 | Viewed by 6200
Abstract
Image segmentation is an important process and a prerequisite for object-based image analysis, but segmenting an image into meaningful geo-objects is a challenging problem. Recently, some scholars have focused on hybrid methods that employ initial segmentation and subsequent region merging, since hybrid methods consider both boundary and spatial information. However, the existing merging criteria (MC) only consider the heterogeneity between adjacent segments when calculating the merging cost, thus limiting the goodness-of-fit between segments and geo-objects, because the homogeneity within segments and the heterogeneity between segments should be treated equally. To overcome this limitation, in this paper a hybrid remote-sensing image segmentation method is employed that considers the objective heterogeneity and relative homogeneity (OHRH) for MC during region merging. The OHRH method is implemented in five different study areas and then compared to our region-merging method using only the objective heterogeneity (OH), as well as to the full lambda-schedule algorithm (FLSA). The unsupervised evaluation indicated that the OHRH method was more accurate than the OH and FLSA methods, and the visual results showed that the OHRH method could distinguish both small and large geo-objects. The OHRH segments showed greater size changes than those of the other methods, demonstrating the superiority of considering within- and between-segment heterogeneity in the OHRH method. Full article
(This article belongs to the Special Issue Pattern Analysis and Recognition in Remote Sensing)
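The idea of weighing between-segment heterogeneity against within-segment homogeneity can be captured in a toy merging cost. The formula below is an illustrative stand-in for the OHRH criterion, not the paper's definition.

```python
# Sketch: a merging cost combining between-segment heterogeneity with the
# within-segment homogeneity change caused by merging. Illustrative only.
import numpy as np

def merging_cost(a: np.ndarray, b: np.ndarray, alpha: float = 0.5) -> float:
    """Cost of merging two adjacent segments given their pixel values.

    a, b: 1-D arrays of pixel intensities; alpha balances the two terms.
    """
    between = abs(a.mean() - b.mean())  # heterogeneity between segments
    merged = np.concatenate([a, b])
    # Increase in internal spread caused by merging (within-segment term).
    within = merged.std() - (len(a) * a.std() + len(b) * b.std()) / len(merged)
    return alpha * between + (1 - alpha) * max(within, 0.0)

rng = np.random.default_rng(2)
seg1 = rng.normal(100, 5, 200)  # e.g., rooftop pixels
seg2 = rng.normal(104, 5, 150)  # similar adjacent surface -> low cost
seg3 = rng.normal(160, 5, 150)  # different land cover -> high cost
print(round(merging_cost(seg1, seg2), 2), round(merging_cost(seg1, seg3), 2))
```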
Show Figures
Graphical abstract
Figure 1: Hybrid segmentation method for region merging considering within- and between-segment heterogeneity.
Figure 2: The urban area located in Beijing and its fused GF-1 multispectral images with a spatial resolution of 2 m: (a) T1, factory area; and (b) T2, urban residential area.
Figure 3: The rural area located in Beijing and its GF-1 multispectral images with a spatial resolution of 8 m: (a) T3, river area; and (b) T4, farmland area.
Figure 4: The forest area located in Beijing and its GF-1 multispectral images with a spatial resolution of 16 m: T5, forest and lake areas.
Figure 5: Segmentation results for the urban subsets, where the parameter α is set to 0.1, 0.4, 0.7 and 1 from left to right.
Figure 6: Merging results for the river subset based on initial segments obtained from watershed segmentation at different scales: panels (a–c) are the initial segments at small, medium and large scales, respectively; panels (d–f) are the subsequent OHRH merging results, where α is set to 0.5.
Figure 7: Merging results for the farmland subset based on initial segments obtained from watershed segmentation at different scales: panels (a–c) are the initial segments at small, medium and large scales, respectively; panels (d–f) are the subsequent OHRH merging results, where α is set to 0.5.
Figure 8: Merging results for the forest subset corrupted by speckle noise of different variances: (a) no speckle noise; (b) variance of 0.0000001; (c) variance of 0.0000005; (d) variance of 0.000001.
Figure 9: The unsupervised segmentation evaluation (USE) for test images T1 and T2 produced by the three methods, where the scale α is varied from 0.1 to 1.
Figure 10: Segmentation results for test images T1 and T2 produced by the three methods, using the optimal scale.
Figure 11: Subsets of the segmentations in Figure 6, shown in the OHRH, objective heterogeneity (OH) and full lambda-schedule algorithm (FLSA) results from left to right.
Figure 12: The USE for test images T3 and T4 produced by the three methods, where the scale α is varied from 0.1 to 1.
Figure 13: Segmentation results for test images T3 and T4 produced by the three methods, using the optimal scale.
Figure 14: Subsets of the segmentations in Figure 9, shown in the OHRH, OH and FLSA results from left to right.
Figure 15: The USE for test image T5 produced by the three methods, where the scale α is varied from 0.1 to 1.
Figure 16: Segmentation results for test image T5 produced by the three methods, using the optimal scale.
Figure 17: Subsets of the segmentations in Figure 12, shown in the OHRH, OH and FLSA results from left to right.
Figure 18: Segmentation results produced with the OHRH and FLSA methods, using the SZTAKI-INRIA building detection dataset.
Figure 19: Subsets of the segmentations in Figure 17, where the first and third columns are the OHRH results and the other columns are the FLSA results.
Figure 20: Standard deviations of segment sizes using the best OHRH, OH and FLSA segmentations for the urban, rural and forest areas, respectively.
19 pages, 4935 KiB  
Article
Comparing Landsat and RADARSAT for Current and Historical Dynamic Flood Mapping
by Ian Olthof and Simon Tolszczuk-Leclerc
Remote Sens. 2018, 10(5), 780; https://doi.org/10.3390/rs10050780 - 18 May 2018
Cited by 21 | Viewed by 7659
Abstract
Mapping the historical occurrence of flood water in time and space provides information that can be used to help mitigate damage from future flood events. In Canada, flood mapping has been performed mainly from RADARSAT imagery in near real-time to enhance situational awareness during an emergency, and more recently from Landsat to examine historical surface water dynamics from the mid-1980s to present. Here, we seek to integrate the two data sources for both operational and historical flood mapping. A main challenge of a multi-sensor approach is ensuring consistency between surface water mapped from sensors that fundamentally interact with the target differently, particularly in areas of flooded vegetation. In addition, automation of workflows that previously relied on manual interpretation is increasingly needed due to the large data volumes contained within satellite image archives. Despite differences between data received from both sensors, common approaches to surface water and flooded vegetation mapping, including multi-channel classification and region growing, can be applied with sensor-specific adaptations for each. Historical open water maps previously generated from 202 Landsat scenes spanning the years 1985–2016 were enhanced to improve flooded vegetation mapping along the Saint John River in New Brunswick, Canada. Open water and flooded vegetation maps were created over the same region from 181 RADARSAT 1 and 2 scenes acquired between 2003–2016. Comparisons of maps from different sensors and hydrometric data were performed to examine the consistency and robustness of products derived from different sensors. Simulations reveal that the methodology used to map open water from dual-pol RADARSAT 2 is insensitive to up to about 20% training error. Landsat depicts open water inundation well, while flooded vegetation can be reliably mapped in leaf-off conditions. RADARSAT mapped approximately 8% less open water area than Landsat and 0.5% more flooded vegetation, while the combined area of open water and flooded vegetation agreed to within 0.2% between sensors. Derived historical products depicting inundation frequency and trends were also generated from each sensor's time-series of surface water maps and compared. Full article
(This article belongs to the Special Issue Remote Sensing for Flood Mapping and Monitoring of Flood Dynamics)
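Both workflows mentioned here lean on classification followed by region growing. The sketch below shows a generic seed-and-grow water mapper on a synthetic SAR backscatter image; the two dB thresholds are assumed values for illustration, not the calibrated parameters of the paper.

```python
# Sketch: seed-threshold region growing for open water on a backscatter image.
import numpy as np
from collections import deque

def grow_water(sigma0: np.ndarray, seed_db: float = -18.0, grow_db: float = -14.0) -> np.ndarray:
    """Grow water regions from confident dark seeds into a looser tolerance zone."""
    water = sigma0 < seed_db                 # very dark pixels: confident water seeds
    queue = deque(map(tuple, np.argwhere(water)))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < sigma0.shape[0] and 0 <= cc < sigma0.shape[1]
                    and not water[rr, cc] and sigma0[rr, cc] < grow_db):
                water[rr, cc] = True
                queue.append((rr, cc))
    return water

rng = np.random.default_rng(4)
img = rng.normal(-8, 2, (100, 100))                  # land backscatter (dB)
img[40:60, 20:80] = rng.normal(-20, 1.5, (20, 60))   # a dark water body
print("water fraction:", grow_water(img).mean())
```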
Show Figures
Graphical abstract
Figure 1: Study area along the Saint John River, including the floodplain mask and the hydrometric stations at Maugerville and Oak Point.
Figure 2: RADARSAT 2 open water and flooded vegetation mapping methodology.
Figure 3: Classification accuracy measured by Cohen's kappa coefficient as a function of random, water omission and water commission training labelling error.
Figure 4: Natural color QuickBird image (left) representing similar water levels to the Landsat (middle) and RADARSAT 2 (right) open water and flooded vegetation maps, with example insets along the bottom. No-data areas representing cloud and cloud shadow are white in the optical imagery.
Figure 5: Coincident open water and flooded vegetation maps from Landsat and RADARSAT 2 for the main portion of the Saint John River floodplain east of Maugerville. No-data areas representing cloud and cloud shadow are white in the optical imagery.
Figure 6: Comparisons of open water (left) and combined open water and flooded vegetation (middle) between temporally and spatially coincident Landsat and RADARSAT maps, and derived flood ratios (right).
Figure 7: Comparisons between normalized flood extents from RADARSAT, Landsat and both combined, and average daily hydrometric water depth at Maugerville and Oak Point on the corresponding satellite acquisition dates.
Figure 8: Open water (top left) and combined open water and flooded vegetation (top right) inundation frequency from Landsat for the 2003–2016 period, Pekel et al.'s 1984–2015 water occurrence product (bottom left), and 2003–2016 RADARSAT open water and flooded vegetation inundation (bottom right) over the main portion of the Saint John River floodplain east of Maugerville.
Figure 9: 2003–2016 RADARSAT inundation trends, with significant decreasing inundation (drying) in red and significant increasing inundation (wetting) in blue, over the main portion of the Saint John River floodplain east of Maugerville.
27 pages, 12203 KiB  
Article
DenseNet-Based Depth-Width Double Reinforced Deep Learning Neural Network for High-Resolution Remote Sensing Image Per-Pixel Classification
by Yiting Tao, Miaozhong Xu, Zhongyuan Lu and Yanfei Zhong
Remote Sens. 2018, 10(5), 779; https://doi.org/10.3390/rs10050779 - 18 May 2018
Cited by 48 | Viewed by 7630
Abstract
Deep neural networks (DNNs) face many problems in the very high resolution remote sensing (VHRRS) per-pixel classification field. Among these problems is the fact that, as network depth increases, gradient disappearance degrades classification accuracy, and the correspondingly larger number of parameters to be learned increases the risk of overfitting, especially when only a small number of labeled VHRRS samples are available for training. Further, the hidden layers in DNNs are not transparent enough, which results in extracted features that are not sufficiently discriminative and contain significant redundancy. This paper proposes a novel depth-width-reinforced DNN that solves these problems to produce better per-pixel classification results in VHRRS. In the proposed method, densely connected neural networks and internal classifiers are combined to build a deeper network and balance the network depth and performance. This strengthens the gradients, decreases the negative effects of gradient disappearance as the network depth increases, and enhances the transparency of the hidden layers, making the extracted features more discriminative and reducing the risk of overfitting. In addition, the proposed method uses multi-scale filters to create a wider neural network. The depth of the filters from each scale is controlled to decrease redundancy, and the multi-scale filters enable the utilization of joint spatio-spectral information and diverse local spatial structure simultaneously. Furthermore, the concept of network in network is applied to better fuse the deeper and wider designs, making the network operate more smoothly. The results of experiments conducted on BJ02, GF02, geoeye and quickbird satellite images verify the efficacy of the proposed method. The proposed method not only achieves competitive classification results but also shows that the network can remain robust and perform well even as the amount of labeled training samples decreases, which fits the small-training-sample situation faced by VHRRS per-pixel classification. Full article
(This article belongs to the Section Remote Sensing Image Processing)
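The densely connected building block the method builds on can be written compactly in PyTorch: each layer consumes the concatenation of all preceding feature maps, which keeps gradients strong in deep stacks. The channel counts below are illustrative, not the paper's configuration.

```python
# Sketch: a DenseNet-style block with dense connectivity between layers.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_channels: int, growth_rate: int, n_layers: int):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(in_channels + i * growth_rate),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels + i * growth_rate, growth_rate,
                          kernel_size=3, padding=1, bias=False),
            ))

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))  # dense connectivity
            features.append(out)
        return torch.cat(features, dim=1)

block = DenseBlock(in_channels=16, growth_rate=12, n_layers=4)
y = block(torch.randn(1, 16, 32, 32))
print(y.shape)  # torch.Size([1, 64, 32, 32]): 16 + 4*12 channels
```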
Show Figures
Graphical abstract
Figure 1: Illustration of convolution.
Figure 2: Flowchart of a traditional convolutional neural network (CNN).
Figure 3: Flowchart of the proposed "network in network" method.
Figure 4: Operation of densely connected convolution.
Figure 5: Dilated convolution.
Figure 6: Structure of a subnetwork in the proposed network method.
Figure 7: The ten experimental images: (a,c,f,j–l) are from the BJ02 satellite; (b,d,e,g,m–o) are from the GF02 satellite; (h,i) are from the geoeye and quickbird satellites, respectively.
Figure 8: (a,b) Overall accuracy acquired from the BJ02 and GF02 images, respectively, based on different network structures.
Figure 9: BJ02 classification results from (a) manually labeled reference data, (b) the proposed method, (c) the method with internal classifiers removed, (d) the method using a 1 × 1 kernel, (e) the method using a 3 × 3 kernel, (f) the method using a 5 × 5 kernel, (g) the ResNet-based method using a contextual deep CNN, (h) the method without densely connected convolution, (i) improved URDNN, (j) SCAE + SVM, (k) two-stream network, (l) deconvolution, (m) parallelepiped, (n) minimum distance, (o) Mahalanobis distance and (p) maximum likelihood.
Figure 10: GF02 classification results from (a) manually labeled reference data, (b) the proposed method, (c) the method with internal classifiers removed, (d) the method using a 1 × 1 kernel, (e) the method using a 3 × 3 kernel, (f) the method using a 5 × 5 kernel, (g) the ResNet-based method using a contextual deep CNN, (h) the method without densely connected convolution, (i) improved URDNN, (j) SCAE + SVM, (k) two-stream network, (l) deconvolution, (m) parallelepiped, (n) minimum distance, (o) Mahalanobis distance and (p) maximum likelihood.
Figure 11: (a,b) Changes in OA values for the BJ02 and GF02 experiments, respectively, as the amount of training data decreases.
Figure 12: Classification images for the extra eight images.
16 pages, 5077 KiB  
Article
Assessing Texture Features to Classify Coastal Wetland Vegetation from High Spatial Resolution Imagery Using Completed Local Binary Patterns (CLBP)
by Minye Wang, Xianyun Fei, Yuanzhi Zhang, Zhou Chen, Xiaoxue Wang, Jin Yeu Tsou, Dawei Liu and Xia Lu
Remote Sens. 2018, 10(5), 778; https://doi.org/10.3390/rs10050778 - 17 May 2018
Cited by 37 | Viewed by 5869
Abstract
Coastal wetland vegetation is a vital component that plays an important role in environmental protection and the maintenance of the ecological balance. As such, the efficient classification of coastal wetland vegetation types is key to the preservation of wetlands. Based on its detailed spatial information, high spatial resolution imagery is an important tool for extracting texture features that improve classification accuracy. In this paper, a texture feature, Completed Local Binary Patterns (CLBP), which is highly suitable for face recognition, is presented and applied to vegetation classification using high spatial resolution Pléiades satellite imagery in the central zone of Yancheng National Natural Reservation (YNNR) in Jiangsu, China. To demonstrate the potential of CLBP texture features, Grey Level Co-occurrence Matrix (GLCM) texture features were used for comparison. The image was classified by vegetation type using a Support Vector Machine (SVM), first with spectral data alone and then with spectral data combined with texture features. The results show that the CLBP and GLCM texture features yielded an accuracy 6.50% higher than that obtained using only spectral information for vegetation classification. However, CLBP showed a greater improvement in classification accuracy than GLCM for Spartina alterniflora. Furthermore, among the CLBP features, CLBP_magnitude (CLBP_m) was more effective than CLBP_sign (CLBP_s), CLBP_center (CLBP_c), and the combined CLBP_s/m or CLBP_s/m/c. These findings suggest that the CLBP approach offers potential for vegetation classification in high spatial resolution images. Full article
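The workflow the abstract describes, stacking per-pixel spectral values with a texture band and classifying with an SVM, can be sketched as follows. This is a minimal illustration with invented array shapes, band counts, and random placeholder data; it is not the authors' implementation, and the single texture band stands in for whichever CLBP or GLCM feature is used (a CLBP encoding sketch follows Figure 4 below).

```python
import numpy as np
from sklearn.svm import SVC

# Minimal sketch: stack spectral bands with a texture band and classify
# each pixel with an SVM. Shapes, band counts, and the random placeholder
# data are assumptions for illustration, not the authors' setup.

H, W = 128, 128
spectral = np.random.rand(H, W, 4)        # e.g., 4 multispectral bands
texture = np.random.rand(H, W, 1)         # e.g., one CLBP_m texture band

features = np.concatenate([spectral, texture], axis=-1).reshape(-1, 5)
labels = np.random.randint(0, 3, H * W)   # placeholder class labels

# Train on a labelled subset, then predict a class for every pixel.
train_idx = np.random.choice(H * W, 1000, replace=False)
clf = SVC(kernel="rbf")                   # RBF kernel as a common default
clf.fit(features[train_idx], labels[train_idx])
classified = clf.predict(features).reshape(H, W)
```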
Show Figures

Figure 1
<p>The location of the study area.</p>
Figure 2">
Figure 2
<p>The vegetation types for classification and their image characteristics.</p>
Figure 3">
Figure 3
<p>An example of encoding process of <span class="html-italic">LBP</span> (<span class="html-italic">P</span> = 8, <span class="html-italic">R</span> = 1).</p>
Figure 4">
Figure 4
<p>An example of encoding process of <span class="html-italic">CLBP_s</span> and <span class="html-italic">CLBP_m</span> (<span class="html-italic">P</span> = 8, <span class="html-italic">R</span> = 1).</p>
Figure 5">
Figure 5
<p>Classification results: (<b>a</b>) Spectral data only; (<b>b</b>) Spectral and GLCM textures; (<b>c</b>) Spectral and CLBP_S textures; (<b>d</b>) Spectral and CLBP_M textures; (<b>e</b>) Spectral and CLBP_M/S textures; and (<b>f</b>) Spectral and CLBP_M/S/C textures.</p>
">
18 pages, 4651 KiB  
Article
Characterizing Tropical Forest Cover Loss Using Dense Sentinel-1 Data and Active Fire Alerts
by Johannes Reiche, Rob Verhoeven, Jan Verbesselt, Eliakim Hamunyela, Niels Wielaard and Martin Herold
Remote Sens. 2018, 10(5), 777; https://doi.org/10.3390/rs10050777 - 17 May 2018
Cited by 49 | Viewed by 16065
Abstract
Fire use for land management is widespread in natural tropical and plantation forests, causing major environmental and economic damage. Recent studies combining active fire alerts with annual forest-cover loss information identified fire-related forest-cover loss areas well, but do not provide a detailed understanding of [...] Read more.
Fire use for land management is widespread in natural tropical and plantation forests, causing major environmental and economic damage. Recent studies combining active fire alerts with annual forest-cover loss information identified fire-related forest-cover loss areas well, but do not provide a detailed understanding of how fires and forest-cover loss are temporally related. Here, we combine Sentinel-1-based, near real-time forest cover information with Visible Infrared Imaging Radiometer Suite (VIIRS) active fire alerts and, for the first time, characterize the temporal relationship between fires and tropical forest-cover loss at high temporal detail and medium spatial scale. We quantify fire-related forest-cover loss and separate fires that predate, coincide with, and postdate forest-cover loss. For the Province of Riau, Indonesia, dense Sentinel-1 C-band Synthetic Aperture Radar data with guaranteed observations at least every 12 days allowed for confident and timely forest-cover-loss detection in natural and plantation forest, with user's and producer's accuracies above 95%. Forest-cover loss was detected and confirmed within 22 days in natural forest and within 15 days in plantation forest. This difference can primarily be related to the different change processes and dynamics in natural and plantation forest. For the period between 1 January 2016 and 30 June 2017, fire-related forest-cover loss accounted for about one third of the natural forest-cover loss, while in plantation forest, less than ten percent of the forest-cover loss was fire-related. We found clear spatial patterns of fires predating, coinciding with, or postdating forest-cover loss. Only a minority of fires in natural and plantation forest temporally coincided with forest-cover loss (13% and 16%, respectively) and can thus be confidently attributed as a direct cause of forest-cover loss. The majority of fires predated (64% and 58%) or postdated (23% and 26%) forest-cover loss, and should be attributed to other key land management practices. Detailed and timely information on how fires and forest-cover loss are temporally related can support tropical forest management, policy development, and law enforcement to reduce unsustainable and illegal fire use in the tropics. Full article
(This article belongs to the Section Forest Remote Sensing)
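Separating fires that predate, coincide with, or postdate forest-cover loss ultimately comes down to comparing each fire alert date with the detected loss date under some tolerance window. The sketch below illustrates that logic; the ±30-day window and the function name are assumptions for illustration, not the attribution rule used in the paper.

```python
from datetime import date, timedelta

def attribute_fire(fire_date: date, loss_date: date,
                   window: timedelta = timedelta(days=30)) -> str:
    """Label a fire alert relative to a detected forest-cover-loss date."""
    if abs(fire_date - loss_date) <= window:
        return "coinciding"
    return "predating" if fire_date < loss_date else "postdating"

# Hypothetical alerts around a loss event detected on 10 June 2016.
print(attribute_fire(date(2016, 3, 1), date(2016, 6, 10)))   # predating
print(attribute_fire(date(2016, 6, 5), date(2016, 6, 10)))   # coinciding
print(attribute_fire(date(2016, 9, 20), date(2016, 6, 10)))  # postdating
```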
Show Figures

Graphical abstract
Figure 1">
Figure 1
<p>Study area, Province of Riau (Indonesia), and 2015 forest land cover [<a href="#B35-remotesensing-10-00777" class="html-bibr">35</a>]. Six natural forest classes and plantation forest are shown. Non-forest land cover is not shown.</p>
Figure 2">
Figure 2
<p>Single-pixel Sentinel-1 VH time series example illustrating the main steps of the method for near real-time forest cover loss detection. The pixel covers a tropical dry forest with a forest cover loss event in April 2017. (<b>A</b>) VH time series observations and the fitted first order harmonic model (blue line); (<b>B</b>) deseasonalized VH time series observations (VH<sub>D</sub>); (<b>C</b>) derived F (green) and NF Gaussian distributions (red); (<b>D</b>) detected forest cover loss. Time of flagged (red dotted line) and confirmed forest cover loss (red line) are shown.</p>
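The caption above compresses the whole per-pixel detection chain; the sketch below replays it on synthetic data under stated assumptions: fit a first-order harmonic model (panel A), deseasonalize the VH series (panel B), and flag observations whose deseasonalized backscatter is more likely under the non-forest (NF) Gaussian than the forest (F) one (panel C). All numbers, including the F/NF distribution parameters, are invented, and the paper's confirmation step over subsequent observations is omitted.

```python
import numpy as np
from scipy.stats import norm

# Synthetic VH time series (dB): seasonal forest signal plus noise.
t = np.linspace(0, 2, 60)                 # time in years, ~12-day spacing
rng = np.random.default_rng(0)
vh = -13 + 1.5 * np.sin(2 * np.pi * t) + rng.normal(0, 0.5, t.size)

# First-order harmonic regression: vh ~ a0 + a1*sin(2*pi*t) + a2*cos(2*pi*t).
X = np.column_stack([np.ones_like(t),
                     np.sin(2 * np.pi * t),
                     np.cos(2 * np.pi * t)])
coef, *_ = np.linalg.lstsq(X, vh, rcond=None)

# Deseasonalize: remove the fitted seasonal term, keep the mean level.
vh_deseason = vh - X @ coef + coef[0]

# Assumed forest (F) and non-forest (NF) backscatter distributions (dB).
f_dist, nf_dist = norm(-13, 1.0), norm(-18, 1.5)

# Flag observations that are more likely non-forest than forest.
flagged = nf_dist.pdf(vh_deseason) > f_dist.pdf(vh_deseason)
```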
Figure 3">
Figure 3
<p>Sample validation pixel (<b>red outline</b>) showing forest-cover loss between days 136 and 147 of 2017. The VH backscatter image time series is shown in the top panels, and the VH (<b>black</b>) and VV (<b>blue</b>) pixel time series for the validation pixel are shown in the bottom panel. The pixel time series shows the smaller forest/non-forest difference for C-band VV compared to VH.</p>
Figure 4">
Figure 4
<p>Spatial accuracy (OA, PA and UA of the forest cover loss class), and temporal accuracy (MTL = mean time lag of confirmed forest cover loss events; MTL<sub>F</sub> = mean time lag of the date at which confirmed forest cover loss events were first flagged) for VV and VH as a function of increasing χ threshold values, separately for natural forest (<b>A</b>) and plantation forest (<b>B</b>).</p>
Figure 5">
Figure 5
<p>Forest-cover loss in Riau, Indonesia detected for the period 1 January 2016–30 June 2017. Areas indicated by (<b>A</b>) and (<b>B</b>) are shown as detailed maps in <a href="#remotesensing-10-00777-f006" class="html-fig">Figure 6</a>.</p>
Figure 6">
Figure 6
<p>Detailed maps of forest-cover loss due to (<b>A</b>) encroachment including road construction into natural forest (0.87°N, 102.52°E); and (<b>B</b>) large-scale plantation dynamics (0.56°N, 102.12°E). High resolution satellite images from PlanetScope (3 m resolution, [<a href="#B46-remotesensing-10-00777" class="html-bibr">46</a>]). Example time series (right panel) indicating the dates at which forest-cover loss was flagged (red dotted line) and confirmed (red line).</p>
Figure 7">
Figure 7
<p>Fire-related (<b>red</b>) versus non-fire-related forest-cover loss (<b>black</b>) in Riau, Indonesia for the period 1 January 2016–30 June 2017. Detail map (<b>right panel</b>) shows an area with partly fire-related forest-cover loss in both natural (<b>dark green</b>) and plantation forest (<b>light green</b>).</p>
Figure 8">
Figure 8
<p>Percentage of active fires predating, coinciding with and postdating forest-cover loss, separately for natural (<b>dark grey</b>) and plantation forest (<b>light grey</b>).</p>
Figure 9">
Figure 9
<p>Spatial pattern of fires predating (green), coinciding with (yellow), and postdating (blue) forest-cover loss in Riau, Indonesia. Areas indicated by (<b>A</b>–<b>C</b>) are shown as detailed maps in <a href="#remotesensing-10-00777-f010" class="html-fig">Figure 10</a>.</p>
Figure 10">
Figure 10
<p>Three examples showing: (<b>A</b>) fire predating forest cover loss in natural forest (0.87°N, 102.52°E); (<b>B</b>) fire coinciding with forest cover loss in plantation forest (1.27°N, 102.14°E); (<b>C</b>) fire predating forest cover loss in natural forest (0.86°N, 102.5°E). High resolution satellite image from PlanetScope (3 m resolution, [<a href="#B46-remotesensing-10-00777" class="html-bibr">46</a>]) (left panel). Pixel time series examples (right panel) indicating the dates at which forest cover loss was flagged (red dotted line) and confirmed (red line), as well as the date of the active fire event (blue line).</p>
">