
 
 

Advances of Multi-Temporal Remote Sensing in Vegetation and Agriculture Research

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing in Agriculture and Vegetation".

Deadline for manuscript submissions: closed (31 March 2020) | Viewed by 212989

Special Issue Editors


Guest Editor
Ottawa Research and Development Centre, Agriculture and Agri-Food Canada, 960 Carling Ave. Ottawa, ON K1A 0C6, Canada
Interests: remote sensing; crop and soil biophysical parameters estimation; crop productivity; agri-environmental sustainability assessment

Guest Editor
Agriculture and Agri-Food Canada, Ottawa, ON, Canada
Interests: earth observation; agro-climate data; soil moisture; drought monitoring; crop yield estimation

Guest Editor
Department of Geography, Nipissing University, North Bay, ON P1B 8L7, Canada
Interests: mangrove forests; wetlands; radar remote sensing systems; optical remote sensing systems; hyperspectral remote sensing systems; Mexican Pacific; West Africa

Special Issue Information

Dear Colleagues,

The dynamics of vegetated lands reflect the combined effects of human activity, climate, and the environment. Remote sensing can capture spatio-temporal information on land surface conditions and is therefore considered a powerful tool for studying the driving forces of vegetation dynamics. In recent years, with the rapid development of hardware, software, and data analysis algorithms, the breadth and depth of remote sensing-based research and applications have expanded tremendously. In particular, with the increased emphasis on climate change and its associated ecological impacts, there has been a rising demand for multi-temporal remote sensing to study vegetation dynamics, greenhouse gas emissions, and agricultural production and adaptation under increasingly frequent extreme weather events. From a precision farming perspective, multi-temporal remote sensing is routinely used to monitor crop growth conditions and pest and disease invasions.

This Special Issue invites contributions showcasing multi-temporal remote sensing applications in vegetation (forest, grassland, and wetland) and agriculture, drawing on various platforms (satellite, aircraft, and UAV), sensors (optical, thermal, and radar), and scales (global, national, and regional), and spanning a wide range of topics, including but not limited to the following areas:

  • Long-term vegetation monitoring;
  • Assimilation of multi-temporal/multi-sensor data into process-based crop/ecological models;
  • Loss of agricultural land;
  • Wetland dynamics;
  • Deforestation and afforestation;
  • Crop rotation patterns and cropping practices;
  • Impact of drought/flood on crop production and the soil environment;
  • UAV sensing in support of precision agriculture.

Dr. Jiali Shang
Dr. Jiangui Liu
Dr. Catherine Champagne
Dr. Taifeng Dong
Dr. John Kovacs
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (32 papers)


Research

Jump to: Review, Other

20 pages, 9264 KiB  
Article
Mapping Winter Wheat with Combinations of Temporally Aggregated Sentinel-2 and Landsat-8 Data in Shandong Province, China
by Feng Xu, Zhaofu Li, Shuyu Zhang, Naitao Huang, Zongyao Quan, Wenmin Zhang, Xiaojun Liu, Xiaosan Jiang, Jianjun Pan and Alexander V. Prishchepov
Remote Sens. 2020, 12(12), 2065; https://doi.org/10.3390/rs12122065 - 26 Jun 2020
Cited by 54 | Viewed by 4792
Abstract
Winter wheat is one of the major cereal crops in China. The spatial distribution of winter wheat planting areas is closely related to food security; however, mapping winter wheat with time-series finer-spatial-resolution satellite images across large areas is challenging. This paper explores the potential of combining temporally aggregated Landsat-8 OLI and Sentinel-2 MSI data available via the Google Earth Engine (GEE) platform for mapping winter wheat in Shandong Province, China. First, six phenological median composites of Landsat-8 OLI and Sentinel-2 MSI reflectance were generated by a temporal aggregation technique according to the winter wheat phenological calendar, covering the seedling, tillering, over-wintering, reviving, jointing-heading and maturing phases, respectively. Then, a Random Forest (RF) classifier was applied to the multi-temporal composites, as well as to mono-temporal winter wheat development phases and mono-sensor data. The results showed that winter wheat could be classified with an overall accuracy of 93.4% and an F1 measure (the harmonic mean of producer's and user's accuracy) of 0.97 when temporally aggregated Landsat-8 and Sentinel-2 data were combined. The results also revealed that classifying multi-temporal composites consistently outperformed mono-temporal imagery (the overall accuracy dropped from 93.4% to as low as 76.4%), and that classifying Landsat-8 OLI and Sentinel-2 MSI imagery combined outperformed classifying either sensor individually. Among the mono-temporal winter wheat development phases, data from the maturing and reviving phases were more important than data from the other phases. In sum, this study confirmed the value of combining temporally aggregated Landsat-8 OLI and Sentinel-2 MSI data and identified the key winter wheat development phases for accurate winter wheat classification. These results show how freely available optical satellite data (Landsat-8 OLI and Sentinel-2 MSI) can be exploited, and which development phases should be prioritized, for accurately mapping winter wheat planting areas across China and elsewhere.
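The temporal-aggregation step described in this abstract, one median composite per phenological phase, can be sketched as follows. This is a minimal illustration only: the acquisition dates, phase windows, and pixel values below are invented assumptions, not the paper's actual data or phenological calendar.

```python
import numpy as np

# Hypothetical acquisition dates (day-of-year) and a reflectance time
# series for a handful of pixels; values are illustrative only.
doy = np.array([290, 310, 330, 20, 60, 90, 110, 130, 150])
stack = np.random.default_rng(0).random((len(doy), 4))  # (time, pixels)

# Assumed phenological windows (day-of-year ranges) for the six phases.
phases = {
    "seedling": (280, 320),
    "tillering": (320, 360),
    "over-wintering": (1, 40),
    "reviving": (40, 95),
    "jointing-heading": (95, 135),
    "maturing": (135, 165),
}

def phase_median(doy, stack, lo, hi):
    """Median composite of all acquisitions falling inside a phase window."""
    mask = (doy >= lo) & (doy < hi)
    return np.median(stack[mask], axis=0)

composites = np.stack([phase_median(doy, stack, lo, hi)
                       for lo, hi in phases.values()])
print(composites.shape)  # one median composite per phase: prints (6, 4)
```

In the paper this aggregation runs on the GEE platform over full image collections; the NumPy version above only illustrates the per-phase median logic.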
Figure 1. Shandong Province, the study area; the locations of the training and validation samples are shown.
Figure 2. The number of good-quality observations (after cloud masking) per pixel from (a) Sentinel-2, (b) Landsat-8, and (c) integrated Landsat-8 and Sentinel-2 images acquired from October 2018 to 10 June 2019; the lower-right histogram shows the distribution of the number of observations.
Figure 3. Flow chart of the overall workflow.
Figure 4. The mean and standard deviation of the five classes; S, T, O, R, J and M represent the seedling, tillering, over-wintering, reviving, jointing-heading and maturing phases of winter wheat, respectively.
Figure 5. Winter wheat map of Shandong Province in 2018–2019, generated from the combination of temporally aggregated Landsat-8 and Sentinel-2 data with the RF classifier; the red square, triangle and circle mark the locations of the subsets in Figure 6.
Figure 6. Three subsets of the winter wheat map: (a) Sentinel-2 false-color composites (bands 4, 8 and 11; locations shown in Figure 5); (b) winter wheat pixels derived from the RF classifier.
Figure 7. Overall accuracy (OA) and kappa coefficient for mono-temporal images; S, T, O, R, J and M represent the seedling, tillering, over-wintering, reviving, jointing-heading and maturing phases of winter wheat, respectively.
Figure 8. Overall accuracy and kappa for classification of single-sensor data with the random forest classifier.
Figure 9. Importance of features in the RF classification with multi-temporal data; S, T, O, R, J and M represent the seedling, tillering, over-wintering, reviving, jointing-heading and maturing phases of winter wheat, respectively.
18 pages, 7726 KiB  
Article
Detection of Crop Seeding and Harvest through Analysis of Time-Series Sentinel-1 Interferometric SAR Data
by Jiali Shang, Jiangui Liu, Valentin Poncos, Xiaoyuan Geng, Budong Qian, Qihao Chen, Taifeng Dong, Dan Macdonald, Tim Martin, John Kovacs and Dan Walters
Remote Sens. 2020, 12(10), 1551; https://doi.org/10.3390/rs12101551 - 13 May 2020
Cited by 52 | Viewed by 8595
Abstract
Synthetic aperture radar (SAR) is more sensitive to the dielectric properties and structure of targets and less affected by weather conditions than optical sensors, making it well suited to detecting changes induced by management practices in agricultural fields. In this study, the capability of C-band SAR data for detecting crop seeding and harvest events was explored. The study was conducted for the 2019 growing season in Temiskaming Shores, an agricultural area in Northern Ontario, Canada. Time-series SAR data acquired by the Sentinel-1 constellation in interferometric wide (IW) mode with dual polarizations, VV (vertical transmit and vertical receive) and VH (vertical transmit and horizontal receive), were obtained. Interferometric SAR (InSAR) processing was conducted to derive the coherence between each pair of SAR images acquired consecutively in time throughout the year. Crop seeding and harvest dates were determined by analyzing the time-series InSAR coherence and SAR backscattering. Variation of the SAR backscattering coefficients, particularly in the VH polarization, revealed seasonal crop growth patterns, while changes in InSAR coherence can be linked to changes in surface structure induced by seeding or harvest operations. Using a set of physically based rules, a simple algorithm was developed to determine crop seeding and harvest dates, with an accuracy of 85% (n = 67) for seeding-date identification and 56% (n = 77) for harvest-date identification. The extra challenge in harvest detection could be attributed to weather conditions, such as rain and its effects on soil moisture and crop dielectric properties during the harvest season. Other factors, such as post-harvest residue removal and field ploughing, could also complicate the identification of harvest events. Overall, given its capability to acquire InSAR-ready images at a 12-day revisit cycle with a single satellite over most of the Earth, the Sentinel-1 constellation provides a valuable data source for detecting crop field management activities through coherent or incoherent change detection techniques. This method should perform even better at the shorter six-day revisit cycle achieved with both Sentinel-1 satellites. With the successful 2019 launch of the Canadian RADARSAT Constellation Mission (RCM), with its tri-satellite system and four polarizations, system reliability and monitoring efficiency are likely to improve further.
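The rule-based idea can be illustrated crudely: a coherence drop between consecutive image pairs, flanked by coherent periods, flags a surface-changing event such as seeding or harvest. The coherence series, thresholds, and rule below are invented for illustration and are not the authors' calibrated algorithm.

```python
import numpy as np

# Illustrative 12-day InSAR coherence series for one field (values assumed):
# high coherence over stable surfaces, sharp drops when tillage, seeding,
# or harvest changes the surface, recovery afterwards.
coh = np.array([0.8, 0.75, 0.2, 0.7, 0.72, 0.28, 0.68, 0.7, 0.25, 0.65])

def detect_events(coh, drop=0.3, recover=0.5):
    """Flag pair indices where coherence falls below `drop` while the
    neighbouring pairs stay above `recover` -- a crude stand-in for the
    physically based seeding/harvest detection rules."""
    events = []
    for i in range(1, len(coh) - 1):
        if coh[i] < drop and coh[i - 1] > recover and coh[i + 1] > recover:
            events.append(i)
    return events

print(detect_events(coh))  # prints [2, 5, 8]
```

A real detector would also consult the VH backscattering trend to separate seeding (bare-soil period) from harvest (end of the growth cycle), as the paper does.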
Figure 1. Study area in Temiskaming Shores, northern Ontario, overlaid with the observed crop fields and forest; the background is a Landsat image acquired on 24 August 2019.
Figure 2. Meteorological conditions of the study area in 2019, obtained from the Earlton Climate Station (ID 6072230), Ontario, Canada: (a) daily precipitation; (b) mean daily temperature and snow depth.
Figure 3. Typical seasonal variation of InSAR coherence for the study area during 6 May–1 December 2019: (a) crop fields and (b) forest.
Figure 4. Typical seasonal variation of VV backscattering coefficients for the study area during 6 May–1 December 2019, measured by Sentinel-1 C-band SAR: (a) crop fields and (b) forest.
Figure 5. Typical seasonal variation of VH backscattering coefficients for the study area during 6 May–1 December 2019, measured by Sentinel-1 C-band SAR: (a) crop fields and (b) forest.
Figure 6. Time-series SAR data and seeding/harvest dates for example fields of (a) barley, (b) canola, (c) soybean and (d) oats; vertical blue lines mark the observed seeding date (dashed) and harvest date (solid).
Figure 7. Pre-harvest (a,b) and post-harvest (c–f) photos of the four example fields shown in Figure 6. Note: no photos were taken in the soybean and canola fields before harvest.
Figure 8. Mapping of change through InSAR coherence at the start (6 May, 18 May, and 30 May; a,b) and end (15 September, 27 September, and 9 October; c,d) of the season. The maps on the left (a,c) are generated using two consecutive InSAR coherence images and one blank image as RGB channels; the maps on the right (b,d) use the three VH backscattering intensity images sequentially as RGB channels.
Figure 9. Maps of crop seeding (a) and harvest (b) dates derived using time-series InSAR coherence and the VH backscattering coefficient, with the proportion of pixels identified as seeded or harvested at each time.
25 pages, 12342 KiB  
Article
Evaluation and Comparison of Light Use Efficiency and Gross Primary Productivity Using Three Different Approaches
by Mengjia Wang, Rui Sun, Anran Zhu and Zhiqiang Xiao
Remote Sens. 2020, 12(6), 1003; https://doi.org/10.3390/rs12061003 - 20 Mar 2020
Cited by 35 | Viewed by 5922
Abstract
Light use efficiency (LUE), which characterizes the efficiency with which vegetation converts captured/absorbed radiation into organic dry matter through photosynthesis, is a key parameter for estimating vegetation gross primary productivity (GPP). Studies suggest that diffuse radiation induces a higher LUE than direct radiation in short-term, site-scale experiments. In this study, the clearness index (CI), defined as the ratio of incident solar radiation at the Earth's surface to the extraterrestrial radiation at the top of the atmosphere, is added to the parameterization approach to account for the proportions of diffuse and direct radiation. Machine learning methods, such as the Cubist regression tree approach, are also popular for studying vegetation carbon uptake. This paper compares and analyzes the performance of three approaches for estimating global LUE and GPP: (1) a parameterization approach without the CI; (2) a parameterization approach with the CI; and (3) a Cubist regression tree approach. We collected GPP and meteorological data from 180 FLUXNET sites as calibration and validation data, and used the Global Land Surface Satellite (GLASS) products and ERA-Interim data as inputs to estimate global LUE and GPP in 2014. Site-scale validation against FLUXNET measurements indicated that the Cubist regression approach performed better than the parameterization approaches. However, when the approaches were applied globally, the parameterization approach with the CI became the most reliable, closely followed by the parameterization approach without the CI. Spatial analysis showed that adding the CI improved the LUE and GPP estimates, especially in high-value zones, while the Cubist regression tree results fluctuated more than those of the parameterization approaches. Although the distributions of LUE varied over the seasons, vegetation in equatorial regions (e.g., South America, middle Africa and Southeast Asia) had the highest LUE, at approximately 1.5 gC/MJ, throughout the year. The three approaches produced roughly consistent global annual GPPs, ranging from 109.23 to 120.65 Pg/yr. Our results suggest the parameterization approaches are robust when extrapolated to the global scale, with the CI version performing slightly better than the version without it. By contrast, the Cubist regression tree produced LUE and GPP with lower accuracy, even though it performed best in model validation at the site scale.
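A minimal sketch of a CI-aware LUE parameterization may clarify the idea: GPP is modeled as PAR × fPAR × LUEmax scaled by environmental terms, with a lower clearness index (more diffuse light) boosting the effective LUE. All coefficients and functional forms here are illustrative assumptions, not the paper's calibrated model.

```python
# Illustrative LUE model sketch; coefficients are made up for demonstration.
def temperature_scalar(t, tmin=0.0, topt=25.0, tmax=40.0):
    """Standard TERRA/MOD17-style temperature attenuation term (0..1)."""
    if t <= tmin or t >= tmax:
        return 0.0
    return ((t - tmin) * (t - tmax)) / ((t - tmin) * (t - tmax) - (t - topt) ** 2)

def gpp(par, fpar, ci, t, lue_max=1.5):
    """GPP (arbitrary units) = PAR * fPAR * LUEmax * f(T) * f(CI)."""
    # Lower clearness index (more diffuse radiation) -> higher effective LUE;
    # the 0.5 gain factor is an assumption, not a calibrated value.
    ci_scalar = 1.0 + 0.5 * (1.0 - ci)
    return par * fpar * lue_max * temperature_scalar(t) * ci_scalar

print(round(gpp(par=10.0, fpar=0.8, ci=0.4, t=25.0), 3))  # prints 15.6
```

At the optimum temperature the temperature scalar is 1, so the example reduces to 10 × 0.8 × 1.5 × 1.3 = 15.6, showing how a cloudy-sky CI of 0.4 raises the effective LUE by 30% under these assumed coefficients.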
Figure 1. Flow chart of the global light use efficiency (LUE) and gross primary productivity (GPP) estimation by the three approaches.
Figure 2. Validation results of the three approaches. Approach V1: parameterization approach without the clearness index (CI); Approach V2: parameterization approach with the CI; Approach V3: Cubist regression tree approach; MODIS: MOD17 algorithm.
Figure 3. Accuracy of the three approaches for each vegetation type. V1: parameterization approach without the CI; V2: parameterization approach with the CI; V3: Cubist regression tree approach.
Figure 4. LUE and GPP validation. V1: parameterization approach without the CI; V2: parameterization approach with the CI; V3: Cubist regression tree approach.
Figure 5. Distributions of LUE from the parameterization approach without the CI (V1).
Figure 6. Distributions of LUE from the parameterization approach with the CI (V2).
Figure 7. Distributions of LUE from the Cubist regression tree approach (V3).
Figure 8. Distribution of annual GPP (gC/m²/yr) produced by the parameterization approach without the CI (left upper), the parameterization approach with the CI (right upper), the Cubist regression tree approach (left middle), and MOD17 GPP in 2014 (right middle); the difference between V2 and MOD17 GPP (left lower); and the histogram of global annual GPP in 2014 from the four algorithms (right lower).
Figure 9. LUE and GPP curves at IT-Isp, AU-Whr, CH-Cha and SE-St1.
Figure 10. Global GPP for each vegetation type. V1: parameterization approach without the CI; V2: parameterization approach with the CI; V3: Cubist regression tree approach.
Figure 11. LUE and GPP curves along latitude. V1: parameterization approach without the CI; V2: parameterization approach with the CI; V3: Cubist regression tree approach.
Figure 12. Curves of (a) GPP, (b) LUE, (c) EF, (d) PAR and (e) Tmean at CH-Lae (1) and US-SRG (2).
Figure 13. Left: LUE and GPP curves and land surface images at the homogeneous sites (US-WCr, US-Me2 and AU-Gin); right: LUE and GPP curves and land surface images at the heterogeneous sites (DE-Kli and CH-Oe2). Red rectangles cover 500 m × 500 m of land; yellow rectangles cover 5000 m × 5000 m.
Figure 14. LUE and GPP curves at US-Los and US-Tw4.
22 pages, 4260 KiB  
Article
Identifying the Contributions of Multi-Source Data for Winter Wheat Yield Prediction in China
by Juan Cao, Zhao Zhang, Fulu Tao, Liangliang Zhang, Yuchuan Luo, Jichong Han and Ziyue Li
Remote Sens. 2020, 12(5), 750; https://doi.org/10.3390/rs12050750 - 25 Feb 2020
Cited by 86 | Viewed by 6061
Abstract
Wheat is a leading cereal grain throughout the world. Timely and reliable wheat yield prediction at a large scale is essential for the agricultural supply chain and global food security, especially in China, an important wheat producing and consuming country. The conventional approach of using climate data, satellite data, or both to build empirical and crop models has prevailed for decades. However, the extent to which climate and satellite data can improve yield prediction is still unknown. In addition, socio-economic (SC) factors may also improve crop yield prediction, but their contributions need in-depth investigation, especially in regions with good irrigation conditions and sufficient fertilization and pesticide application. Here, we made the first attempt to predict wheat yield across China from 2001 to 2015 at the county level by integrating multi-source data, including monthly climate data, satellite data (i.e., vegetation indices (VIs)), and SC factors. The results show that incorporating all the datasets with three machine learning methods (Ridge Regression (RR), Random Forest (RF), and Light Gradient Boosting (LightGBM)) achieves the best yield-prediction performance (R²: 0.68~0.75), with the largest individual contribution from climate (~0.53), followed by VIs (~0.45) and SC factors (~0.30). In addition, combinations of VIs and climate data capture inter-annual yield variability more effectively than other combinations (e.g., climate with SC, or VIs with SC), while combining SC with climate data better captures spatial yield variability. Climate data provide extra and unique information across the entire growing season, whereas VIs do so mainly at their peak stage (Mar.~Apr.). Furthermore, incorporating spatial information and soil properties into the benchmark models improves wheat yield prediction by 0.06 and 0.12, respectively. The optimal wheat yield prediction can be achieved approximately two months before maturity. Our study develops timely and robust methods for winter wheat yield prediction at a large scale in China, which can be applied to other crops and regions.
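Of the three machine learning methods, ridge regression is simple enough to sketch in closed form. The synthetic features below merely stand in for the climate, VI, and socio-economic predictors; the data, weights, and regularization strength are illustrative assumptions, not the paper's.

```python
import numpy as np

# Toy stand-in for the yield-prediction setup: 200 "county-years" with
# 5 synthetic predictors and a linear yield signal plus noise.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 5))
true_w = np.array([0.8, -0.3, 0.5, 0.0, 0.2])
y = X @ true_w + 0.1 * rng.normal(size=200)

def ridge_fit(X, y, alpha=1.0):
    """Closed-form ridge regression: w = (X^T X + alpha I)^-1 X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

def r2(y, yhat):
    """Coefficient of determination, the paper's performance metric."""
    ss_res = np.sum((y - yhat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

w = ridge_fit(X, y)
print(round(r2(y, X @ w), 2))  # high R² on this clean synthetic data
```

The paper's reported R² of 0.68~0.75 comes from held-out county-years with real, noisy predictors; the clean synthetic fit here only demonstrates the mechanics of the RR baseline.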
Figure 1. Location of the study area. (a) The main winter wheat producing provinces (black lines) and counties (orange lines), which account for ~76% of the total wheat yield in China; the green shading indicates winter wheat cropping areas. (b) The extent of China and the provincial boundaries (gray lines).
Figure 2. A typical example illustrating the differences in the spatial and temporal patterns of the data from Options 1–3. Option 1 uses the raw county-level data (the red dashed line shows its linear fit) (a); Option 2 removes the linear trend (the red line in Option 1) from the raw data (b); and Option 3 adds the multi-year average of the raw county-level data to the Option 2 data (c); (e)–(g) show the spatial patterns of the three types of yields corresponding to each Option.
Figure 3. Fifteen-year-averaged (2001–2015) spatial patterns, after normalization, of crop yield (a), the satellite-based variables (b–d), climate variables (e–i), and socio-economic factors (j–n) for all counties in the study region. Note: all of the data have a mean of zero and a standard deviation of one; b–i are based on March of each year.
Figure 4. Model performance (predicted R²) of the three methods, separated by the three Options, with different inputs for the entire growing season: (a) "Raw"; (b) "Detrend"; and (c) "Detrend+Mean". Blue is RR, green is RF and red is LightGBM. The error bars are one standard deviation of the predicted R² from 100 ensembles obtained by randomly dividing the training and testing datasets.
Figure 5. (a) Model performance (predicted R²) using VIs for the whole growing season and climate data for a specific stage: Early (Oct. and Feb.), Peak (Mar. and Apr.), or Late (Jun. and Jul.); the dashed line represents the benchmark performance using only VIs. (b) Model performance using climate data for the whole growing season and VIs for a specific stage (the same stages as in (a)); the dashed line represents the benchmark performance using only climate data. The error bars are one standard deviation of the predicted R² from 100 ensembles obtained by randomly dividing the training and testing datasets.
Figure 6. Temporal progression of model performance for the three methods (RR, RF, and LightGBM). The left panels show model performance by month (the prediction at any given month uses input data from the beginning of the growing season to that month, so later periods contain more inputs and usually perform better): blue for climate data plus VIs; orange for VIs only; gray for climate data only. The right panels show the difference between the combined inputs and VIs only (a blue column, blue line minus orange line in the corresponding left panel, indicating the benefit of adding climate data), and the difference between VIs + climate and climate only (a gray column, blue line minus gray line, indicating the benefit of adding VIs).
Figure 7. Model performance (predicted R²) after including spatial information and soil properties; the green and red columns show the R² after adding soil properties and spatial information, respectively, to the benchmark model (blue). The error bars are one standard deviation of the predicted R² from 100 ensembles obtained by randomly dividing the training and testing datasets.
Figure 8. Results of the "leave-one-year-out" experiment across years: one year's data are held out for testing while the other years' data are used for training. The worst performance, in 2002 and 2007, may be due to extreme events (black for RR, red for RF, and green for LightGBM).
Figure A1. The statistical area of each province from 2001 to 2015.
Figure A2. The delta value of the prediction R² (Option 3 − Option 2).
24 pages, 6418 KiB  
Article
Analysis of the Spatiotemporal Change in Land Surface Temperature for a Long-Term Sequence in Africa (2003–2017)
by Nusseiba NourEldeen, Kebiao Mao, Zijin Yuan, Xinyi Shen, Tongren Xu and Zhihao Qin
Remote Sens. 2020, 12(3), 488; https://doi.org/10.3390/rs12030488 - 3 Feb 2020
Cited by 44 | Viewed by 6056
Abstract
It is very important to understand the temporal and spatial variations of land surface temperature (LST) in Africa to determine the effects of temperature on agricultural production. Although thermal infrared remote sensing technology can quickly obtain surface temperature information, it is greatly affected by clouds and rainfall. To obtain a complete and continuous dataset on the spatiotemporal variations in LST in Africa, a reconstruction model based on the moderate resolution imaging spectroradiometer (MODIS) LST time series and ground station data was built to refactor the LST dataset (2003–2017). The first step in the reconstruction model is to filter low-quality LST pixels contaminated by clouds and then fill the pixels using observation data from ground weather stations. Then, the missing pixels are interpolated using the inverse distance weighting (IDW) method. The evaluation shows that the accuracy between reconstructed LST and ground station data is high (root mean square er–ror (RMSE) = 0.84 °C, mean absolute error (MAE) = 0.75 °C and correlation coefficient (R) = 0.91). The spatiotemporal analysis of the LST indicates that the change in the annual average LST from 2003–2017 was weak and the warming trend in Africa was remarkably uneven. Geographically, “the warming is more pronounced in the north and the west than in the south and the east”. The most significant warming occurred near the equatorial region in South Africa (slope > 0.05, R > 0.61, p < 0.05) and the central (slope = 0.08, R = 0.89, p < 0.05) regions, and a nonsignificant decreasing trend occurred in Botswana. Additionally, the mid-north region (north of Chad, north of Niger and south of Algeria) became colder (slope > −0.07, R = 0.9, p < 0.05), with a nonsignificant trend. Seasonally, significant warming was more pronounced in winter, mostly in the west, especially in Mauritania (slope > 0.09, R > 0.9, p < 0.5). 
Different surface types responded differently to surface temperature at different times, which provides important information for understanding the effects of temperature change on crop yields and is critical for the planning of agricultural farming systems in Africa. Full article
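The gap-filling step described above (masking cloud-contaminated pixels, then interpolating the gaps with inverse distance weighting) can be sketched as follows. The function and the toy pixel grid are purely illustrative, not the authors' implementation:

```python
import numpy as np

def idw_fill(known_xy, known_vals, targets_xy, power=2.0):
    """Inverse distance weighting: estimate a missing LST pixel from valid
    neighbouring pixels (or nearby station observations)."""
    filled = []
    for tx, ty in targets_xy:
        d = np.hypot(known_xy[:, 0] - tx, known_xy[:, 1] - ty)
        if np.any(d == 0):                    # target coincides with a known point
            filled.append(known_vals[np.argmin(d)])
            continue
        w = 1.0 / d ** power                  # nearer observations weigh more
        filled.append(np.sum(w * known_vals) / np.sum(w))
    return np.array(filled)

# toy case: four valid pixels (LST in degrees C) around one cloudy pixel
known = np.array([[0, 1], [2, 1], [1, 0], [1, 2]], dtype=float)
vals = np.array([30.0, 32.0, 31.0, 31.0])
print(idw_fill(known, vals, [(1.0, 1.0)]))    # four equidistant neighbours average to 31.0
```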
Graphical abstract
Full article ">Figure 1
<p>The study area is divided into six subregions (I, II, III, IV, V, VI), and the red stars represent the meteorological sites used for validation in six critical countries (<b>a</b>: Sudan, <b>b</b>: Algeria, <b>c</b>: Mauritania <b>d</b>: Ethiopia, <b>e</b>: Congo, and <b>f</b>: South Africa).</p>
Full article ">Figure 2
<p>Flowchart showing a summary of the method used to fill the gaps in the MODIS LST data.</p>
Full article ">Figure 3
<p>Spatial distribution of reconstructed missing values for February 2017 and August 2017 from the Terra and Aqua satellites: (<b>a</b>,<b>c</b>) original LST data; (<b>b</b>,<b>d</b>) data after interpolation (areas of invalid data are shown in white).</p>
Full article ">Figure 4
<p>Spatial distributions of annual LST in Africa: (<b>a</b>) mean LST, (<b>b</b>) Pettitt test, (<b>c</b>) slope, (<b>d</b>) correlation coefficient from 2003 to 2017.</p>
Full article ">Figure 5
<p>Spatial patterns of interannual LST changes (slope) (<b>a</b>) during the day and (<b>b</b>) at night, and correlation coefficients for (<b>c</b>) day and (<b>d</b>) night.</p>
Full article ">Figure 6
<p>Spatiotemporal patterns in the average LSTs over Africa at (<b>a</b>) daytime, (<b>b</b>) nighttime, (<b>c</b>) and the difference between day and night LSTs.</p>
Full article ">Figure 7
<p>Seasonal LST (°C): (<b>a</b>) mean, (<b>b</b>) slope, and (<b>c</b>) R from 2003 to 2017.</p>
Full article ">Figure 8
<p>The monthly variations (<b>slope</b>) and correlation coefficients (<b>R</b>) of LST from 2003 to 2017.</p>
Full article ">Figure 9
<p>Comparison of the monthly data from ground stations in different countries and the monthly MODIS LST data ((<b>a</b>) Sudan, (<b>b</b>) Algeria, (<b>c</b>) Mauritania, (<b>d</b>) Ethiopia, (<b>e</b>) Congo, and (<b>f</b>) South Africa).</p>
Full article ">
22 pages, 6959 KiB  
Article
Prediction of Winter Wheat Yield Based on Multi-Source Data and Machine Learning in China
by Jichong Han, Zhao Zhang, Juan Cao, Yuchuan Luo, Liangliang Zhang, Ziyue Li and Jing Zhang
Remote Sens. 2020, 12(2), 236; https://doi.org/10.3390/rs12020236 - 9 Jan 2020
Cited by 220 | Viewed by 15590
Abstract
Wheat is one of the main crops in China, and crop yield prediction is important for regional trade and national food security. There are increasing concerns with respect to how to integrate multi-source data and employ machine learning techniques to establish a simple, timely, and accurate crop yield prediction model at the administrative-unit level. Many previous studies mainly focused on the whole crop growth period through expensive manual surveys, remote sensing, or climate data. However, the effect of selecting different time windows on yield prediction was still unknown. Thus, we separated the whole growth period into four time windows and assessed their corresponding predictive ability, taking the major winter wheat production regions of China as an example. Firstly, we developed a modeling framework to integrate climate data, remote sensing data, and soil data to predict winter wheat yield based on the Google Earth Engine (GEE) platform. The results show that the models can accurately predict yield 1~2 months before the harvesting dates at the county level in China, with an R2 > 0.75 and yield error less than 10%. Support vector machine (SVM), Gaussian process regression (GPR), and random forest (RF) represent the top three best methods for predicting yields among the eight typical machine learning models tested in this study. In addition, we found that different agricultural zones and temporal training settings affect prediction accuracy. The three models perform better as more winter wheat growing season information becomes available. Our findings highlight a potentially powerful tool for predicting yield using multi-source data and machine learning in other regions and for other crops. Full article
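The core of the framework above, training regressors on multi-source county features and scoring them with five-fold cross-validation, can be sketched as below. The synthetic data and model settings are assumptions for illustration only; the study's actual features come from GEE, and the full comparison covers eight models:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)
# hypothetical county-level features: monthly NDVI/EVI, TMAX, TMIN, precipitation...
X = rng.normal(size=(300, 12))
# stand-in "yield": driven by the first six (earlier-window) features plus noise;
# dropping columns would mimic shorter prediction time windows
y = X[:, :6].sum(axis=1) + rng.normal(scale=0.3, size=300)

cv = KFold(n_splits=5, shuffle=True, random_state=0)
results = {}
for name, model in [("SVM", SVR(kernel="rbf", C=10.0)),
                    ("RF", RandomForestRegressor(n_estimators=200, random_state=0))]:
    results[name] = cross_val_score(model, X, y, cv=cv, scoring="r2").mean()
    print(f"{name}: mean five-fold R2 = {results[name]:.2f}")
```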
Graphical abstract
Full article ">Figure 1
<p>Study area and China’s agricultural divisions (Zone I: Northern arid and Semiarid Region; Zone II: Loess Plateau; Zone III: Huang-Huai-Hai Plain; Zone IV: Sichuan Basin and Surrounding Regions; Zone V: Middle-lower Yangtze Plain; Zone VI: Yunnan-Guizhou Plateau and Southern China).</p>
Full article ">Figure 2
<p>The distribution of NDVI (<b>a</b>), EVI (<b>b</b>), TMAX (<b>c</b>), TMIN (<b>d</b>), DI (<b>e</b>), PRE (<b>f</b>), SM (<b>g</b>) and the physical and chemical properties of the soil (<b>h</b>) for available data (2001–2014) for the monthly period. The distribution of soil properties for available data for 629 counties in China. 1: January; 2: February; 3: March; 4: April; 5: May; 10: October; 11: November; 12: December. NDVI: normalized vegetation index; EVI: enhanced vegetation index; TMAX: monthly maximum temperature; TMIN: monthly minimum temperature; DI: palmer drought severity index; PRE: monthly precipitation accumulation; SM: soil moisture; SILT: silt content; GRAVEL: volume percentage of crushed stone; OC: organic carbon content; REF_BULK: soil bulk density; PH_H2O: hydrogen ion concentration; SAND: sand content; CLAY: clay content; T and S represent the topsoil layer (0–30 cm) and the subsoil layer (30–100 cm), respectively.</p>
Full article ">Figure 3
<p>R2 (<b>a</b>), RMSE (<b>b</b>) and MAE (<b>c</b>) skill scores of eight models for winter wheat in different growth periods at the county scale based on five-fold cross-validation results in different time windows. (The unit of RMSE and MAE is kg/ha; 10-4: from October to April; 10-5: from October to May; 11-4: from November to April; 11-5: from November to May.). The error bar represents percentage error, instead of standard error. The top of the line stands for the value plus 15%, and the bottom of the line stands for the value minus 15%.</p>
Full article ">Figure 4
<p>Scatter plots of observed yield and predicted yield of GPR, SVM and RF models for different growth periods of winter wheat at the county scale. One point represents a county (the alphabets (<b>a</b>), (<b>b</b>) and (<b>c</b>) represent GPR, SVM and RF; numbers 1~4 represent October~April, October~May, November~April, and November~May, respectively).</p>
Full article ">Figure 5
<p>RMSE (<b>a</b>) and MAE (<b>b</b>) skill scores of the best performing prediction models for winter wheat in different growth periods at the county scale. (SVM: Support vector machine; GPR: Gaussian process regression; RF: Random forest, and 10-4: from October to April; 10-5: from October to May; 11-4: from November to April; 11-5: from November to May). Error bar represent standard error.</p>
Full article ">Figure 6
<p>Spatial distribution of observed and predicted yields of winter wheat in 2014. (<b>a</b>) yields observed at county scale; yields predicted by SVM (<b>b</b>), GPR (<b>c</b>), and RF (<b>d</b>) at grid-scale. The wheat growth period selected by prediction is from October to next May.</p>
Full article ">Figure 7
<p>Percentage errors of winter wheat in different agricultural zones. The wheat growth period selected by prediction is from October to next May. Percentage error = (predicted yield ‒ observed yield)/observed yield * 100. Error bar represents one standard error.</p>
Full article ">Figure 8
<p>Predictor variable importance based on the RF model, from October to May next year.</p>
Full article ">Figure A1
<p>Yield residual distribution predicted by the combination of different models and growth periods. For (<b>a</b>), the combination is SVM 10-5, SVM 11-5 (<b>b</b>), SVM 10-4 (<b>c</b>), SVM 11-4 (<b>d</b>), RF 10-5 (<b>e</b>), RF 11-5 (<b>f</b>), RF 10-4 (<b>g</b>), RF 11-4 (<b>h</b>), GPR 10-5 (<b>i</b>), GPR 11-5 (<b>j</b>), GPR 10-4 (<b>k</b>), and GPR 11-4 (<b>l</b>). The residual distribution of all models had passed Kolmogorov-Smirnov test (Sig. &gt; 0.05), and the distribution of residual conforms to normal distribution. (10-4: from October to April; 10-5: from October to May; 11-4: from November to April; 11-5: from November to May.).</p>
Full article ">
17 pages, 14211 KiB  
Article
Optimal Temporal Window Selection for Winter Wheat and Rapeseed Mapping with Sentinel-2 Images: A Case Study of Zhongxiang in China
by Shiyao Meng, Yanfei Zhong, Chang Luo, Xin Hu, Xinyu Wang and Shengxiang Huang
Remote Sens. 2020, 12(2), 226; https://doi.org/10.3390/rs12020226 - 9 Jan 2020
Cited by 48 | Viewed by 4759
Abstract
Currently, the main remote sensing-based crop mapping methods are based on spectral-temporal features. However, there has been a lack of research on the selection of multi-temporal images, and most methods simply use all the available images during the crop growth cycle. In this study, in order to explore the optimal temporal window for crop mapping with limited remote sensing data, we exhaustively tested all possible combinations of temporal windows, considering both spatial accuracy and statistical accuracy as evaluation indices. We collected all the available cloud-free Sentinel-2 multi-spectral images for the winter wheat and rapeseed growth periods in the study area in southern China, and used the random forest (RF) method as the classifier to identify the optimal temporal window. The spatial and statistical accuracies of all the results were assessed using ground survey data and local agricultural census data. The optimal temporal window for the mapping of winter wheat and rapeseed in the study area was obtained by identifying the best-performing set of results. In addition, the variable importance (VI) index was used to evaluate the importance of the different bands for crop mapping. The results of the spatial accuracy, statistical accuracy, and the VI showed that combinations of images from the later stages of crop growth were more suitable for crop mapping. Full article
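The exhaustive temporal-window search can be illustrated as follows: enumerate every combination of available image dates, train an RF classifier on the stacked bands of each combination, and keep the best-scoring window. The dates, band counts, and synthetic samples below are hypothetical stand-ins for the Sentinel-2 scenes and ground-survey labels:

```python
import itertools
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
dates = ["Nov", "Dec", "Jan", "Feb", "Mar", "Apr"]   # hypothetical cloud-free scenes
n_bands = 4                                          # bands kept per scene (illustrative)
y = rng.integers(0, 3, size=240)                     # wheat / rapeseed / other samples
# synthetic reflectances: later-season scenes carry progressively more class signal
X = {d: rng.normal(size=(240, n_bands)) + (i * 0.4) * y[:, None]
     for i, d in enumerate(dates)}

scores = {}
for k in range(1, len(dates) + 1):
    for combo in itertools.combinations(dates, k):
        feats = np.hstack([X[d] for d in combo])     # stack bands of the window
        clf = RandomForestClassifier(n_estimators=30, random_state=0)
        scores[combo] = cross_val_score(clf, feats, y, cv=3).mean()

best = max(scores, key=scores.get)
print("best temporal window:", best, f"OA={scores[best]:.2f}")
```

Because the synthetic signal grows with the season, the search favours late-season windows, mirroring the paper's finding that later growth stages discriminate the crops best.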
Graphical abstract
Full article ">Figure 1
<p>Zhongxiang City satellite image from April 8, 2018, using a true-color composite of the blue, green, and red bands of Sentinel-2.</p>
Full article ">Figure 2
<p>The ground survey polygon locations in the study area. The red polygons represent the winter wheat planting areas, and the green polygons represent the rapeseed planting areas. Images p1 and p2 are the Google Earth satellite images corresponding to the locations of the ground survey samples. The in-situ photographs taken during the ground survey in January 2018 are shown in (<b>a1</b>) and (<b>a2</b>) (rapeseed), and (<b>b1</b>) and (<b>b2</b>) (winter wheat).</p>
Full article ">Figure 3
<p>The available cloud-free image data during the entire growth cycle of winter crops in Zhongxiang City. The images are false-color composites of the green, red, and near-infrared (NIR) bands.</p>
Full article ">Figure 4
<p>The workflow of optimal temporal window selection.</p>
Full article ">Figure 5
<p>Zhongxiang City winter wheat and rapeseed mapping results with different combinations of Sentinel-2 images. From left to right, top to bottom, are the results of the weighted residual error (<math display="inline"><semantics> <mrow> <mi>WRes</mi> </mrow> </semantics></math>) rankings 1 to 20.</p>
Full article ">Figure 5 Cont.
<p>Zhongxiang City winter wheat and rapeseed mapping results with different combinations of Sentinel-2 images. From left to right, top to bottom, are the results of the weighted residual error (<math display="inline"><semantics> <mrow> <mi>WRes</mi> </mrow> </semantics></math>) rankings 1 to 20.</p>
Full article ">Figure 6
<p>The winter wheat and rapeseed mapping details of the best-performing ranking result: (<b>a</b>) and (<b>b</b>) are the true-color image and the result map, as are (<b>c</b>,<b>d</b>). The true-color image was acquired on the 8 April, 2018. The green and red blocks represent the rapeseed and winter wheat, respectively.</p>
Full article ">Figure 7
<p>The assessment indicator results for all the temporal combinations. (<b>a</b>) OA; (<b>b</b>) Kappa; (<b>c</b>) <math display="inline"><semantics> <mrow> <msub> <mrow> <mrow> <mrow> <mi mathvariant="normal">R</mi> <mi mathvariant="normal">e</mi> <mi>s</mi> </mrow> </mrow> </mrow> <mi>r</mi> </msub> </mrow> </semantics></math>; (<b>d</b>) <math display="inline"><semantics> <mrow> <msub> <mrow> <mrow> <mrow> <mi mathvariant="normal">R</mi> <mi mathvariant="normal">e</mi> <mi>s</mi> </mrow> </mrow> </mrow> <mi>w</mi> </msub> </mrow> </semantics></math>; (<b>e</b>) <math display="inline"><semantics> <mrow> <mi>WRes</mi> </mrow> </semantics></math>. OA, overall accuracy; Kappa, Kappa coefficient.</p>
Full article ">Figure 7 Cont.
<p>The assessment indicator results for all the temporal combinations. (<b>a</b>) OA; (<b>b</b>) Kappa; (<b>c</b>) <math display="inline"><semantics> <mrow> <msub> <mrow> <mrow> <mrow> <mi mathvariant="normal">R</mi> <mi mathvariant="normal">e</mi> <mi>s</mi> </mrow> </mrow> </mrow> <mi>r</mi> </msub> </mrow> </semantics></math>; (<b>d</b>) <math display="inline"><semantics> <mrow> <msub> <mrow> <mrow> <mrow> <mi mathvariant="normal">R</mi> <mi mathvariant="normal">e</mi> <mi>s</mi> </mrow> </mrow> </mrow> <mi>w</mi> </msub> </mrow> </semantics></math>; (<b>e</b>) <math display="inline"><semantics> <mrow> <mi>WRes</mi> </mrow> </semantics></math>. OA, overall accuracy; Kappa, Kappa coefficient.</p>
Full article ">Figure 8
<p>(<b>a</b>) Average <math display="inline"><semantics> <mrow> <msub> <mrow> <mrow> <mrow> <mi mathvariant="normal">R</mi> <mi mathvariant="normal">e</mi> <mi>s</mi> </mrow> </mrow> </mrow> <mi>w</mi> </msub> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <msub> <mrow> <mi mathvariant="normal">R</mi> <mi mathvariant="normal">e</mi> <mi>s</mi> </mrow> <mi>r</mi> </msub> </mrow> </semantics></math>, and <math display="inline"><semantics> <mrow> <mrow> <mi>WRes</mi> </mrow> </mrow> </semantics></math> for all the combinations of temporal images; (<b>b</b>) average OA and Kappa for each temporal combination; (<b>c</b>) the best performance in <math display="inline"><semantics> <mrow> <mi>WRes</mi> </mrow> </semantics></math> for each temporal combination.</p>
Full article ">Figure 9
<p>VI derived from the RF classifier, with the MDA method and OOB data. The sequential sorting of the X-axis represents the spectral bands of each temporal Sentinel-2 image, i.e., 1-Blue, 2-Green, 3-Red, 4,5,6-Red-Edge, 7-NIR, 8-SWIR-1, 9-SWIR-2. MDA, mean decrease accuracy; OOB, out-of-bag samples, SWIR, short-wave infrared.</p>
Full article ">
26 pages, 26733 KiB  
Article
Land Cover Classification of Nine Perennial Crops Using Sentinel-1 and -2 Data
by James Brinkhoff, Justin Vardanega and Andrew J. Robson
Remote Sens. 2020, 12(1), 96; https://doi.org/10.3390/rs12010096 - 26 Dec 2019
Cited by 41 | Viewed by 9125
Abstract
Land cover mapping of intensive cropping areas facilitates an enhanced regional response to biosecurity threats and to natural disasters such as drought and flooding. Such maps also provide information for natural resource planning and analysis of the temporal and spatial trends in crop distribution and gross production. In this work, 10 meter resolution land cover maps were generated over a 6200 km2 area of the Riverina region in New South Wales (NSW), Australia, with a focus on locating the most important perennial crops in the region. The maps discriminated between 12 classes, including nine perennial crop classes. A satellite image time series (SITS) of freely available Sentinel-1 synthetic aperture radar (SAR) and Sentinel-2 multispectral imagery was used. A segmentation technique grouped spectrally similar adjacent pixels together, to enable object-based image analysis (OBIA). K-means unsupervised clustering was used to filter training points and classify some map areas, which improved supervised classification of the remaining areas. The support vector machine (SVM) supervised classifier with radial basis function (RBF) kernel gave the best results among several algorithms trialled. The accuracies of maps generated using several combinations of the multispectral and radar bands were compared to assess the relative value of each combination. An object-based post classification refinement step was developed, enabling optimization of the tradeoff between producers’ accuracy and users’ accuracy. Accuracy was assessed against randomly sampled segments, and the final map achieved an overall count-based accuracy of 84.8% and area-weighted accuracy of 90.9%. Producers’ accuracies for the perennial crop classes ranged from 78 to 100%, and users’ accuracies ranged from 63 to 100%. This work develops methods to generate detailed and large-scale maps that accurately discriminate between many perennial crops and can be updated frequently. Full article
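The object-based post-classification refinement described above, where a segment adopts its majority pixel class only when that class exceeds a chosen proportion threshold, can be sketched like this. The toy segment map and threshold are illustrative, not the authors' code:

```python
import numpy as np

def refine_objects(pixel_classes, segment_ids, threshold=0.6):
    """Object-based refinement: a segment takes its majority pixel class only
    when that class covers at least `threshold` of the segment's pixels;
    otherwise the pixel-level labels are kept unchanged."""
    refined = pixel_classes.copy()
    for seg in np.unique(segment_ids):
        mask = segment_ids == seg
        labels, counts = np.unique(pixel_classes[mask], return_counts=True)
        top = counts.argmax()
        if counts[top] / mask.sum() >= threshold:
            refined[mask] = labels[top]          # homogenize the segment
    return refined

seg = np.array([1, 1, 1, 1, 2, 2, 2, 2])         # two segments of four pixels
pix = np.array([3, 3, 3, 0, 1, 2, 1, 2])         # pixel-based class labels
print(refine_objects(pix, seg, threshold=0.6))   # segment 1 homogenized, segment 2 kept
```

Raising the threshold trades users' accuracy against producers' accuracy, which is the tuning knob the paper exploits.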
Graphical abstract
Full article ">Figure 1
<p>Map of the study area, showing a false color image ([R,G,B] = [NIR,R,G]) from a January 2019 Sentinel-2 mosaic.</p>
Full article ">Figure 2
<p>S1 and S2 image preprocessing method.</p>
Full article ">Figure 3
<p>Finding optimum parameters for segment generation, and using the segments to create segmented and K-means cluster images.</p>
Full article ">Figure 4
<p>Assessment of generated segments against a polygon outlining one of the 75 test fields. In this example, the total area error is the sum of the area of the red and green shapes, and there are three segments intercepting the polygon outlining the field.</p>
Full article ">Figure 5
<p>Web apps for collecting (<b>a</b>) training and (<b>b</b>) validation data.</p>
Full article ">Figure 6
<p>Generating band samples from training data and filtering samples to include only those belonging to points with evergreen or summer growth.</p>
Full article ">Figure 7
<p>Selecting optimum supervised classifiers and parameters per band combination.</p>
Full article ">Figure 8
<p>Method of generating pixel-based, object-based and refined object-based maps.</p>
Full article ">Figure 9
<p>The average number of segments per field polygon geometry, as a function of (<b>a</b>) the SNIC segmentation size parameter, and (<b>b</b>) the average segment area error. Multiple segment merging settings are shown. For example, 2 <math display="inline"><semantics> <mrow> <mo>×</mo> <mspace width="3.33333pt"/> <mo>Δ</mo> </mrow> </semantics></math> &lt; 0.1 merges adjacent segments if the mean NDVI difference across all months is less than 0.1, with two iterations.</p>
Full article ">Figure 10
<p>Analysis of K-means (k = 3) clustering of training points. (<b>a</b>) shows the number of training points per class and per cluster. (<b>b</b>) shows the maximum NDVI for each training point per cluster, with classes grouped into Perennial, Annual and Other. (<b>c</b>–<b>e</b>) show the time-series for training points from each cluster, including the mean and 1 and 2 standard deviations from the mean.</p>
Full article ">Figure 10 Cont.
<p>Analysis of K-means (k = 3) clustering of training points. (<b>a</b>) shows the number of training points per class and per cluster. (<b>b</b>) shows the maximum NDVI for each training point per cluster, with classes grouped into Perennial, Annual and Other. (<b>c</b>–<b>e</b>) show the time-series for training points from each cluster, including the mean and 1 and 2 standard deviations from the mean.</p>
Full article ">Figure 11
<p>Time series NDVI for the 12 classes within K-means cluster 0. The graphs show the mean value for all training points per class, along with one and two standard deviations from the mean. The classes shown are: (<b>a</b>) Almond, (<b>b</b>) Annual, (<b>c</b>) Cherry, (<b>d</b>) Citrus, (<b>e</b>) Forest, (<b>f</b>) Hazelnut, (<b>g</b>) Olive, (<b>h</b>) Other, (<b>i</b>) Plum, (<b>j</b>) Stonefruit, (<b>k</b>) Vineyard and (<b>l</b>) Walnut.</p>
Full article ">Figure 11 Cont.
<p>Time series NDVI for the 12 classes within K-means cluster 0. The graphs show the mean value for all training points per class, along with one and two standard deviations from the mean. The classes shown are: (<b>a</b>) Almond, (<b>b</b>) Annual, (<b>c</b>) Cherry, (<b>d</b>) Citrus, (<b>e</b>) Forest, (<b>f</b>) Hazelnut, (<b>g</b>) Olive, (<b>h</b>) Other, (<b>i</b>) Plum, (<b>j</b>) Stonefruit, (<b>k</b>) Vineyard and (<b>l</b>) Walnut.</p>
Full article ">Figure 12
<p>Sample classification accuracy for: (<b>a</b>) one month and (<b>b</b>) all combinations of two months S2 5-band images.</p>
Full article ">Figure 13
<p>Example 33 km<sup>2</sup> area of the classified maps. (<b>a</b>) K-means clustering with k = 3. (<b>b</b>) Object-based classified map using S1 radar time series (TS). (<b>c</b>) Pixel-based classification using S2(10) + S1(2) TS. (<b>d</b>) Proportion of pixels belonging to the majority class within each segment. (<b>e</b>) Refined object-based map using S2(10) + S1(2) TS features with proportion threshold of 0%. (<b>f</b>) Refined object-based map using S2(10) + S1(2) TS features with proportion threshold of 80%.</p>
Full article ">Figure 14
<p>Overall accuracy of object-based classified maps generated from the segmented image, assessed using the random validation segments. All results are using the SVM classifier with RBF apart from those marked “CART” and “RF”. Optimal SVM RBF parameters used for each classification are shown on the bars. Additional notes: <b>*1</b> CART parameters: MinSplitPoplulation:1, MinLeafPopulation:1, MaxDepth:10. <b>*2</b> RF parameters: NumberOfTrees:128, VariablesPerSplit:16, MinLeafPopulation:2. <b>*3</b> SVM RBF supervised classification applied over the whole map (not using K-means clustering to filter samples and classify some Annual and Other areas).</p>
Full article ">Figure 15
<p>Trading average producers’ and users’ accuracies, by adjusting the threshold for proportion of pixels in each segment belonging to the majority class.</p>
Full article ">Figure 16
<p>Final classified map using S2(10) + S1(2) time series features, and a majority pixel proportion threshold of 60%. The location of the detailed maps in <a href="#remotesensing-12-00096-f013" class="html-fig">Figure 13</a> is indicated by the black inset rectangle, and points show the location of the validation segments.</p>
Full article ">Figure 17
<p>Confusion matrices for the final map accuracy. (<b>a</b>) Count-based. (<b>b</b>) Area-based.</p>
Full article ">
20 pages, 7868 KiB  
Article
Combining Optical, Fluorescence, Thermal Satellite, and Environmental Data to Predict County-Level Maize Yield in China Using Machine Learning Approaches
by Liangliang Zhang, Zhao Zhang, Yuchuan Luo, Juan Cao and Fulu Tao
Remote Sens. 2020, 12(1), 21; https://doi.org/10.3390/rs12010021 - 18 Dec 2019
Cited by 108 | Viewed by 8482
Abstract
Maize is an extremely important grain crop, and demand has increased sharply throughout the world. China alone contributes nearly one-fifth of total production, despite its decreasing arable land. Timely and accurate prediction of maize yield in China is therefore critical for ensuring global food security. Previous studies primarily used either visible or near-infrared (NIR) based vegetation indices (VIs), or climate data, or both to predict crop yield. However, other satellite data from different spectral bands have been underutilized, even though they contain unique information on crop growth and yield. In addition, although a joint application of multi-source data significantly improves crop yield prediction, the combinations of input variables that achieve the best results have not been well investigated. Here we integrated optical, fluorescence, thermal satellite, and environmental data to predict county-level maize yield across four agro-ecological zones (AEZs) in China using a regression-based method (LASSO), two machine learning (ML) methods (RF and XGBoost), and a deep learning (DL) network (LSTM). The results showed that combining multi-source data explained more than 75% of yield variation. Satellite data at the silking stage contributed more information than other variables, and solar-induced chlorophyll fluorescence (SIF) performed almost equivalently to the enhanced vegetation index (EVI), largely because of its low signal-to-noise ratio and coarse spatial resolution. Extremely high temperature and vapor pressure deficit during the reproductive period were the most important climate variables affecting maize production in China. Soil properties and management factors contained extra information on crop growth conditions that cannot be fully captured by satellite and climate data. We found that the ML and DL approaches clearly outperformed the regression-based method, and ML offered greater computational efficiency and easier generalization than DL.
Our study is an important effort to combine multi-source remotely sensed and environmental data for large-scale yield prediction. The proposed methodology provides a paradigm for predicting the yield of other crops and in other regions. Full article
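A minimal sketch of the model comparison idea: fit a regression-based method (LASSO) and an ML method (RF) on the same multi-source feature table and compare ten-fold cross-validated R2. The synthetic data stand in for the paper's satellite and environmental variables; XGBoost and the LSTM are omitted here for brevity:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
# hypothetical county features: EVI/SIF per growth stage, GDD, KDD, Vpd, soil covariates
X = rng.normal(size=(400, 15))
# stand-in "yield": a linear term plus a non-linearity that a purely
# linear model cannot fully capture
y = 2 * X[:, 3] + np.maximum(X[:, 7], 0) ** 2 + rng.normal(scale=0.5, size=400)

results = {}
for name, model in [("LASSO", Lasso(alpha=0.01)),
                    ("RF", RandomForestRegressor(n_estimators=300, random_state=0))]:
    results[name] = cross_val_score(model, X, y, cv=10, scoring="r2").mean()
    print(f"{name}: ten-fold R2 = {results[name]:.2f}")
```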
Graphical abstract
Full article ">Figure 1
<p>The maize planting areas and four main agro-ecological zones in China.</p>
Full article ">Figure 2
<p>The correlations between yield and the transient variables (i.e., solar-induced chlorophyll (SIF), the enhanced vegetation index (EVI), and climate variables). *, ** and *** represent significance levels of <span class="html-italic">p</span> &lt; 0.05, <span class="html-italic">p</span> &lt; 0.01 and <span class="html-italic">p</span> &lt; 0.001, respectively; “NS” denotes significance levels above 0.05.</p>
Full article ">Figure 3
<p>Spatiotemporal correlations between satellite variables (EVI, (<b>a</b>,<b>b</b>); SIF, (<b>c</b>,<b>d</b>)) and yield. In the box plot, the horizontal lines show the maximum and minimum values; the middle line shows the median; the upper and lower edges of the boxes show the 75th and 25th percentiles, respectively; the gray square represents the mean; the right part is the spatial pattern of the correlation for the month with the highest correlation coefficient (the red circle in the left part).</p>
Full article ">Figure 4
<p>Spatiotemporal correlations between the selected climate variables (GDD, (<b>a</b>,<b>b</b>); KDD, (<b>c</b>,<b>d</b>); Pre, (<b>e</b>,<b>f</b>); Vpd, (<b>g</b>,<b>h</b>)) and yield.</p>
Full article ">Figure 5
<p>Comparison of the recorded and multi-model predicted yields. The <span class="html-italic">R<sup>2</sup></span> and <span class="html-italic">RMSE</span> were ten-fold cross-validated values.</p>
Full article ">Figure 6
<p>The spatial patterns of the recorded yield (<b>a</b>) and predicted yield for RF (<b>b</b>), XGBoost (<b>c</b>), and LSTM (<b>d</b>).</p>
Full article ">Figure 7
<p>The spatial patterns of the relative errors for random forest (RF) (<b>a</b>), Extreme gradient boosting (XGBoost) (<b>b</b>), and long short-term memory (LSTM) (<b>c</b>).</p>
Full article ">Figure 8
<p>(<b>a</b>) <span class="html-italic">R<sup>2</sup></span> for one specific stage of SIF combined with all climate variables and other data. The dashed line represents the result of using environmental data, excluding SIF. (<b>b</b>) <span class="html-italic">R<sup>2</sup></span> for SIF combined with one specific stage of climate variables and other data. The dashed line represents the result of using SIF and other data, excluding climate variables.</p>
Full article ">Figure 9
<p>Feature importance values for the top of 18 variables from XGBoost models in each agro-ecological zone. The red dashed line indicates the 10th variable.</p>
Full article ">
21 pages, 7740 KiB  
Article
Land Use Changes in the Zoige Plateau Based on the Object-Oriented Method and Their Effects on Landscape Patterns
by Ge Shen, Xiuchun Yang, Yunxiang Jin, Sha Luo, Bin Xu and Qingbo Zhou
Remote Sens. 2020, 12(1), 14; https://doi.org/10.3390/rs12010014 - 18 Dec 2019
Cited by 35 | Viewed by 4161
Abstract
Land use/land cover change (LUCC) is the most direct driving force of landscape pattern change. The Zoige Plateau, a natural ecosystem containing the largest high-altitude swamp wetland in China, has undergone great changes in its land use pattern in recent years, but how the changes in each land use type affect the landscape pattern is uncertain. Here, we used the object-oriented method to extract land use information for 2015. Then, combined with land use data, the land use change characteristics from 2000 to 2015 were analyzed. We used correlation analysis to systematically analyze the effects of land use changes on the landscape pattern. Three key conclusions were reached. (1) Land use information for the Zoige Plateau could be extracted with high accuracy by combining the object-oriented method and a support vector machine (SVM). The overall accuracy was 93.2% and the Kappa coefficient was 0.889. (2) The comprehensive dynamic degree of land use was highest from 2010 to 2015. From 2000 to 2015, the wetland area decreased the fastest, with 57.05% of the wetlands transferred out. Construction land increased the fastest, mainly through areas transferred in from grassland and farmland. (3) The effects of unused land, farmland, and construction land on the overall landscape pattern were stronger than those of the other types, among which farmland had the most significant impact (with a correlation coefficient of 0.959, p < 0.001). The change in unused land was the most significant factor associated with the landscape area pattern, and both water bodies and unused land showed strong correlations with landscape shape pattern change. This suggests that the effects of land use types occupying relatively small areas on the landscape pattern have intensified. This study will provide guidance for the environmental management of local land resources and other natural ecosystem areas. Full article
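The land use change analysis above rests on two standard quantities: a transfer matrix (cross-tabulation of two classified maps) and the single land use dynamic degree. A minimal sketch with hypothetical class codes and a toy 3 × 3 map:

```python
import numpy as np

# hypothetical class codes: 0 wetland, 1 grassland, 2 farmland, 3 construction

def transfer_matrix(map_t1, map_t2, n_classes):
    """Cross-tabulate two classified maps: M[i, j] = pixel count that moved
    from class i at time 1 to class j at time 2."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    np.add.at(m, (map_t1.ravel(), map_t2.ravel()), 1)
    return m

def dynamic_degree(area_t1, area_t2, years):
    """Single land use dynamic degree: percentage area change per year."""
    return (area_t2 - area_t1) / area_t1 / years * 100

t1 = np.array([[0, 0, 1], [1, 2, 2], [3, 1, 0]])   # toy map, time 1
t2 = np.array([[1, 0, 1], [1, 2, 3], [3, 3, 0]])   # toy map, time 2
M = transfer_matrix(t1, t2, 4)
print(M)
# wetland (class 0): row sum = area at time 1, column sum = area at time 2
print(dynamic_degree(M[0].sum(), M[:, 0].sum(), years=15))
```

Row and column sums of the matrix give the "transferred out" and "transferred in" areas quoted in the abstract.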
Figure 1
<p>Location of the study area.</p>
Full article ">Figure 2
<p>Comparison of segmentation results at different scales based on Landsat 8 OLI. (<b>a</b>) Optimum segmentation. The red line represents the object border. (<b>b</b>) Over-segmentation. The red line represents the object border. (<b>c</b>) Under-segmentation. The yellow areas in Figure c represent one segmented object, which contains mixed pixels.</p>
Full article ">Figure 3
<p>Image segmentation sketch diagrams of representative areas based on Landsat 8 OLI. The red line represents the object border. (<b>a</b>) Farmland, construction land, and grassland; (<b>b</b>) grassland, water body, and construction land; (<b>c</b>) unused land, wetland, and grassland; (<b>d</b>) grassland and forest land.</p>
Full article ">Figure 4
<p>Land use results for the Zoige Plateau in 2000–2015. (<b>a</b>): Land use map in 2000; (<b>b</b>): Land use map in 2005; (<b>c</b>): Land use map in 2010; (<b>d</b>): Land use map in 2015. Land use data for 2000, 2005, and 2010 are obtained from “Ten-Year Evaluation of National Ecological Environmental Change (2000–2010)” database [<a href="#B47-remotesensing-12-00014" class="html-bibr">47</a>]. The data for 2015 is the classification result based on the object-oriented method.</p>
Full article ">Figure 5
<p>The upper part of the solid red line shows the results of rate of change in the proportional area of different land use in the Zoige Plateau in 2000–2015. The rate of change in the proportional area refers to the rate of proportion change of the land use type in two time periods. The below part of the solid red line show the results of land use proportional area of the Zoige Plateau in 2000–2015.</p>
Full article ">Figure 6
<p>Dynamic attitude of land use in the Zoige Plateau in 2000–2015 (%).</p>
Full article ">Figure 7
<p>Distribution of land use conversion in the Zoige Plateau. (<b>a</b>): Distribution of land use conversion during 2000–2005; (<b>b</b>): Distribution of land use conversion during 2005–2010; (<b>c</b>): Distribution of land use conversion during 2010–2015; (<b>d</b>): Distribution of land use conversion during 2000–2015. In the legend, W stands for wetland, G for grassland, Fo for forest land, Wb for water body, Fa for farmland, C for construction land and U for unused land, respectively. WG means that the wetlands are converted into grasslands. Other annotations also indicate such a transformation relationship.</p>
Full article ">Figure 8
<p>Temporal changes of 15 different landscape pattern indices of landscape scale. (<b>a</b>) Landscape pattern indices of the area category; (<b>b</b>) landscape pattern indices of the shape category; (<b>c</b>) landscape pattern indices of the accumulation and dispersion category; (<b>d</b>) Landscape pattern indices of the diversity category. Many of the landscape pattern indices results in the graphs have been processed to facilitate the display of change trends. For example, the values of NP were reduced 10,000-fold.</p>
Full article ">Figure 9
<p>Correlation results between the landscape pattern indices of the class scale and the corresponding landscape scale results. The ordinate represents 13 landscape pattern indices results of the landscape scale, and the abscissa represents landscape pattern indices results of different land use type. Different colors indicate the correlation results between the 13 landscape pattern indices rate of the class scale and the corresponding landscape scale results. *** <span class="html-italic">p</span> &lt; 0.001, ** <span class="html-italic">p</span> &lt; 0.01, * <span class="html-italic">p</span> &lt; 0.05, # <span class="html-italic">p</span> &lt; 0.10.</p>
Full article ">Figure 10
<p>Statistical data of Zoige County in 2000–2015. The data were obtained through the local bureau of statistics.</p>
Full article ">
16 pages, 13622 KiB  
Article
Assessment of Leaf Area Index of Rice for a Growing Cycle Using Multi-Temporal C-Band PolSAR Datasets
by Ze He, Shihua Li, Yong Wang, Yueming Hu and Feixiang Chen
Remote Sens. 2019, 11(22), 2640; https://doi.org/10.3390/rs11222640 - 12 Nov 2019
Cited by 11 | Viewed by 3481
Abstract
C-band polarimetric synthetic aperture radar (PolSAR) data have previously been explored for estimating the leaf area index (LAI) of rice. Although the rice-growing cycle was partially covered in most of these studies, details for each phenological phase need to be further characterized. Additionally, the selection and exploration of polarimetric parameters have not been comprehensive. This study evaluates the potential of a set of polarimetric parameters derived from multi-temporal RADARSAT-2 datasets for rice LAI estimation. The relationships of rice LAI with backscattering coefficients and polarimetric decomposition parameters were examined over a complete phenological cycle. Most polarimetric parameters had weak relationships (R2 < 0.30) with LAI at the transplanting, reproductive, and maturity phases. Stronger relationships (R2 > 0.50) were observed at the vegetative phase. HV/VV and RVI_FD had significant relationships (R2 > 0.80) with rice LAI over the whole growth period and were used to develop empirical models. The best LAI inversion performance (RMSE = 0.81) was obtained with RVI_FD. The acceptable error demonstrates the possibility of using decomposition parameters for rice LAI estimation. The HV/VV-based model had a slightly lower estimation accuracy (RMSE = 1.29) but can be a practical alternative considering the wide availability of dual-polarized datasets. Full article
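As a rough illustration of the kind of empirical model the paper fits (the exact RVI_FD definition and the calibrated coefficients are in the paper itself), the sketch below uses synthetic backscatter values and shows one detail that matters in practice: a ratio such as HV/VV must be formed in linear power units, not in dB.

```python
import numpy as np

# Sketch of an empirical LAI model from a polarimetric ratio, using synthetic
# data; the paper regresses field-measured LAI on HV/VV and RVI_FD.
rng = np.random.default_rng(0)

def db_to_linear(db):
    """Backscatter coefficients are usually reported in dB; band ratios
    should be formed after converting back to linear power units."""
    return 10.0 ** (db / 10.0)

# Hypothetical training samples: sigma0 in dB for HV and VV, plus "true" LAI
hv_db = rng.uniform(-22, -12, 40)
vv_db = hv_db + rng.uniform(4, 8, 40)            # VV typically exceeds HV
ratio = db_to_linear(hv_db) / db_to_linear(vv_db)
lai = 1.5 + 12.0 * ratio + rng.normal(0, 0.2, 40)  # synthetic ground truth

# Least-squares fit of LAI = a * (HV/VV) + b, one plausible empirical form
a, b = np.polyfit(ratio, lai, 1)
rmse = np.sqrt(np.mean((a * ratio + b - lai) ** 2))
print(f"LAI ~ {a:.2f} * (HV/VV) + {b:.2f}, RMSE = {rmse:.2f}")
```

The RMSE here reflects only the synthetic noise; the paper's RMSE values (0.81 and 1.29) come from validating against independent field measurements.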
Show Figures

Figure 1: The study area shown in the Landsat-8 true-color image acquired on 17 July 2016. Thirty sample sites are marked in yellow; eleven ground control points are marked in red.
Figure 2: Rice phenological phases identified at the 30 sample sites on the four dates of in situ observation.
Figure 3: Growth status of rice plants at the (a) transplanting, (b) vegetative, (c) reproductive, and (d) maturity phases (nadir-view photographs).
Figure 4: The study area shown in the Freeman–Durden decomposition image (26 July 2016). The red, green, and blue bands correspond to P_D, P_V, and P_S, respectively.
Figure 5: Phenological variation of rice LAI derived from the ground measurements.
Figure 6: Phenological variation of the basic polarimetric parameters (training dataset). (a) HV, VV, and HH backscattering coefficients; (b) entropy (H), anisotropy (A), and alpha angle (α); (c) powers of the surface (P_S), double-bounce (P_D), and volume (P_V) scattering.
Figure 7: Phenological variation of the polarimetric parameter combinations (training dataset). (a) VV/HH, HV/VV, and HV/HH; (b) P_V/P_S, P_D/P_V, and P_D/P_S; (c) RVI_σ; (d) RVI_CP; (e) RVI_FD.
Figure 8: Scatterplots and trends of rice LAI (training dataset) as a function of (a) HV/VV or (b) RVI_FD.
Figure 9: Measured and modeled LAIs. The black diagonal is the 1:1 line.
21 pages, 1999 KiB  
Article
Assessing the Impact of Satellite Revisit Rate on Estimation of Corn Phenological Transition Timing through Shape Model Fitting
by Emily Myers, John Kerekes, Craig Daughtry and Andrew Russ
Remote Sens. 2019, 11(21), 2558; https://doi.org/10.3390/rs11212558 - 31 Oct 2019
Cited by 15 | Viewed by 4410
Abstract
Agricultural monitoring is an important application of earth-observing satellite systems. In particular, image time-series data are often fit to functions called shape models that are used to derive phenological transition dates or predict yield. This paper aimed to investigate the impact of imaging frequency on model fitting and estimation of corn phenological transition timing. Images (PlanetScope 4-band surface reflectance) and in situ measurements (Soil Plant Analysis Development (SPAD) and leaf area index (LAI)) were collected over a corn field in the mid-Atlantic during the 2018 growing season. Correlation was performed between candidate vegetation indices and SPAD and LAI measurements. The Normalized Difference Vegetation Index (NDVI) was chosen for shape model fitting based on the ground truth correlation and initial fitting results. Plot-average NDVI time-series were cleaned and fit to an asymmetric double sigmoid function, from which the day of year (DOY) of six different function parameters were extracted. These points were related to ground-measured phenological stages. New time-series were then created by removing images from the original time-series, so that average temporal spacing between images ranged from 3 to 24 days. Fitting was performed on the resampled time-series, and phenological transition dates were recalculated. Average range of estimated dates increased by 1 day and average absolute deviation between dates estimated from original and resampled time-series data increased by 1/3 of a day for every day of increase in average revisit interval. In the context of this study, higher imaging frequency led to greater precision in estimates of shape model fitting parameters used to estimate corn phenological transition timing. Full article
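The shape-model step, fitting an asymmetric double sigmoid to an NDVI time series and reading phenological transition dates off the fitted parameters, can be sketched with SciPy's `curve_fit`. The function form and parameter names below are illustrative, not necessarily the paper's exact notation:

```python
import numpy as np
from scipy.optimize import curve_fit

def adsf(t, vb, va, m1, t1, m2, t2):
    """Asymmetric double sigmoid: a rising logistic (green-up) minus a
    falling one (senescence), on a base NDVI vb with amplitude va."""
    return vb + va * (1 / (1 + np.exp(-m1 * (t - t1)))
                      - 1 / (1 + np.exp(-m2 * (t - t2))))

# Synthetic plot-average NDVI every 3 days over a growing season
doy = np.arange(120, 300, 3)
rng = np.random.default_rng(1)
ndvi = adsf(doy, 0.2, 0.6, 0.15, 160, 0.10, 260) + rng.normal(0, 0.01, doy.size)

p0 = [0.2, 0.6, 0.1, 170, 0.1, 250]              # rough initial guess
popt, _ = curve_fit(adsf, doy, ndvi, p0=p0, maxfev=20000)
print("green-up midpoint DOY ~ %.1f, senescence midpoint DOY ~ %.1f"
      % (popt[3], popt[5]))
```

Resampling the `doy` grid to sparser intervals and refitting is the core of the paper's revisit-rate experiment: with fewer points, the fitted midpoints scatter more around their dense-series values.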
Show Figures

Graphical abstract
Figure 1: (a) View of the field and plot locations; (b) contrast-enhanced, closer view of the plots. Both images show RGB surface reflectance derived from imagery collected by a PlanetScope satellite on 16 June 2018. Each plot is represented as a white box. The smaller plots are roughly 9 by 21 m, and the larger plots are roughly 18 by 21 m. The plots in the third and fourth columns from the left (the two columns of small plots closest to the larger plots) were excluded from analysis due to treatment errors.
Figure 2: Asymmetric double sigmoid function fit to the NDVI time-series for one of the corn subplots (Plot 74), with function parameters labeled. The figure shows the original (non-resampled) time-series data, which had an average revisit interval of three days.
Figure 3: Visual representation of the temporal resampling procedure and analysis; all steps are described in more detail in the Methods section. The analysis was performed separately for each of the 32 plots shown in Figure 1, and the end results (range, maximum absolute deviation, and average absolute deviation as a function of revisit interval) were then averaged across all 32 plots.
Figure 4: Four different plot-average NDVI time-series fit to the asymmetric double sigmoid function. Plot 63 was non-irrigated and treated with 200% N, Plot 94 non-irrigated with 0% N, Plot 82 irrigated with 50% N, and Plot 24 irrigated with 75% N.
Figure 5: The four plot-average NDVI time-series resampled to an 8-day average revisit interval and fit to the asymmetric double sigmoid function.
Figure 6: The four plot-average NDVI time-series resampled to a 13-day average revisit interval and fit to the asymmetric double sigmoid function.
Figure 7: The four plot-average NDVI time-series resampled to an 18-day average revisit interval and fit to the asymmetric double sigmoid function.
Figure 8: The four plot-average NDVI time-series resampled to a 23-day average revisit interval and fit to the asymmetric double sigmoid function.
Figure 9: Average absolute deviations of D1, D_i, D2, D3, D_d, and D4 from the original D-values, shown as a function of average temporal revisit rate. Each data point represents the deviation of the calculated D-value across all resampled time-series at that image frequency; statistics were collected separately for each plot and then averaged across all 32 plots.
Figure 10: Range of D-values after temporal resampling, maximum absolute deviation of post-resampling D-values from the original D-value, and average absolute deviation of post-resampling D-values from the original D-value, shown as functions of average revisit interval; statistics were collected separately for each D-value and each plot and then averaged across all six D-values and all 32 plots.
19 pages, 6611 KiB  
Article
Estimating Rainfall Interception of Vegetation Canopy from MODIS Imageries in Southern China
by Jianping Wu, Liyang Liu, Caihong Sun, Yongxian Su, Changjian Wang, Ji Yang, Jiayuan Liao, Xiaolei He, Qian Li, Chaoqun Zhang and Hongou Zhang
Remote Sens. 2019, 11(21), 2468; https://doi.org/10.3390/rs11212468 - 23 Oct 2019
Cited by 18 | Viewed by 4751
Abstract
The interception of rainfall by vegetation canopies plays an important role in the hydrologic processes of ecosystems. Most estimates of canopy rainfall interception in present studies come from field observations at the plot scale. However, it is difficult, yet important, to map regional rainfall interception by the vegetation canopy at larger scales, especially in the rainy areas of southern China. To better understand the spatiotemporal variation of vegetation canopy rainfall interception at the basin scale in this region, we extended a rainfall interception model by combining observed rainfall data and moderate resolution imaging spectroradiometer leaf area index (MODIS_LAI) data to quantitatively estimate the vegetation canopy rainfall interception rate (CRIR) at small/medium basin scales in Guangdong Province, which is undergoing large changes in vegetation cover due to rapid urban expansion. The results showed that the CRIR in Guangdong declined continuously during 2004–2012 but increased slightly in 2016, and the spatial variability of the CRIR showed a diminishing yearly trend. The CRIR also exhibited a distinctive spatial pattern, with higher rates to the east and west of the mountainous areas and lower rates in the central mountainous and coastal areas. This pattern was more closely related to the spatial variation of the LAI than to that of rainfall, because frequent extreme rainfall events saturate vegetation leaves. Further analysis demonstrated that forest coverage, rather than background climate, has a certain impact on canopy rainfall interception, especially the proportion of broad-leaved forests in the basin, although more in-depth study is warranted.
In conclusion, the results of this study provide insights into the spatiotemporal variation of canopy rainfall interception at the basin scale of the Guangdong Province, and suggest that forest cover should be increased by adjusting the species composition to increase the proportion of native broad-leaved species based on the local condition within the basin. In addition, these results would be helpful in accurately assessing the impacts of forest ecosystems on regional water cycling, and provide scientific and practical implications for water resources management. Full article
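The paper's extended interception model is not reproduced here, but the general mechanism, canopy storage capacity scaling with LAI and filling asymptotically with event rainfall so that heavy rain saturates the canopy and depresses the interception *rate*, can be sketched with a generic Aston/Merriam-type formulation. The parameter values `s_leaf` and `k` below are illustrative defaults, not the paper's calibrated values:

```python
import math

def canopy_interception(p_mm, lai, s_leaf=0.2, k=0.7):
    """Generic Aston/Merriam-type interception for one rainfall event (mm).
    Storage capacity s_max scales with LAI; interception approaches s_max
    asymptotically as event rainfall p_mm grows. s_leaf (mm of water per
    unit LAI) and k (cover factor) are illustrative, not calibrated."""
    s_max = s_leaf * lai                     # canopy storage capacity (mm)
    if s_max <= 0:
        return 0.0
    return s_max * (1.0 - math.exp(-k * p_mm / s_max))

# Interception rate = intercepted depth / event rainfall
for p, lai in [(10.0, 1.0), (10.0, 4.0), (50.0, 4.0)]:
    i = canopy_interception(p, lai)
    print(f"P={p} mm, LAI={lai}: I={i:.2f} mm, rate={i/p:.1%}")
```

The last case shows the saturation effect the abstract alludes to: during heavy rainfall the intercepted depth plateaus, so the interception rate is governed more by LAI than by rainfall amount.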
Show Figures

Graphical abstract
Figure 1: Basins in Guangdong. BJ: Beijiang basin, DJ: Dongjiang basin, HJ: Hanjiang basin, XJ: Xijiang basin, PR: Pearl River basin, MYJ: Moyangjiang basin, JJ: Jianjiang river basin.
Figure 2: Spatial patterns of annual rainfall in Guangdong.
Figure 3: Spatial patterns of annual rainfall in the seven major basins in Guangdong.
Figure 4: Spatial patterns of the annual mean leaf area index (LAI) in Guangdong.
Figure 5: Spatial patterns of the annual mean LAI in the seven major basins in Guangdong.
Figure 6: Spatial pattern of subregions with different rainfall interception in Guangdong in 2004.
Figure 7: Variation of monthly mean canopy rainfall interception rates in Guangdong.
Figure 8: Spatial patterns of annual mean CRIRs in Guangdong.
Figure 9: Spatial patterns of the trend slope (SLOPE) for CRIRs in Guangdong.
Figure 10: Spatial patterns of the annual mean CRIR in the seven major river basins in Guangdong.
Figure 11: Basin-pairs of the four climatic zones in Guangdong.
Figure 12: Annual mean variation of the CRIR in basin-pairs in the four climatic zones in Guangdong.
Figure 13: (a) The relationship between the CRIR and forest coverage of all integrated basins in the four climatic zones; (b) the percentage of forest types in different climatic zones. MSZ: middle subtropical zone, NSSZ: northern south subtropical zone, SSSZ: southern south subtropical zone, NTZ: north tropical zone.
21 pages, 6017 KiB  
Article
Combining Evapotranspiration and Soil Apparent Electrical Conductivity Mapping to Identify Potential Precision Irrigation Benefits
by Mallika A. Nocco, Samuel C. Zipper, Eric G. Booth, Cadan R. Cummings, Steven P. Loheide II and Christopher J. Kucharik
Remote Sens. 2019, 11(21), 2460; https://doi.org/10.3390/rs11212460 - 23 Oct 2019
Cited by 10 | Viewed by 4202
Abstract
Precision irrigation optimizes the spatiotemporal application of water using evapotranspiration (ET) maps to assess water stress or soil apparent electrical conductivity (ECa) maps as a proxy for plant available water content. However, ET and ECa maps are rarely used together. We developed high-resolution ET and ECa maps for six irrigated fields in the Midwest United States between 2014 and 2016. Our research goals were to (1) validate ET maps developed using the High-Resolution Mapping of EvapoTranspiration (HRMET) model and aerial imagery via comparison with ground observations in potato, sweet corn, and pea agroecosystems; (2) characterize relationships between ET and ECa; and (3) identify potential precision irrigation benefits across rotations. We demonstrated the synergy of combined ET and ECa mapping for evaluating whether intrafield differences in ECa correspond to actual water use for different crop rotations. We found that ET and ECa have stronger relationships in sweet corn and potato rotations than field corn. Thus, sweet corn and potato crops may benefit more from precision irrigation than field corn, even when grown rotationally on the same field. We recommend that future research consider crop rotation, intrafield soil variability, and existing irrigation practices together when determining potential water use, savings, and yield gains from precision irrigation. Full article
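The ET–ECa comparison uses Kendall's tau, a rank correlation that only assumes a monotonic relationship and is robust to non-normal map values. A minimal sketch with synthetic pixel values (the field maps themselves would be co-registered rasters flattened to vectors):

```python
import numpy as np
from scipy.stats import kendalltau

# Synthetic stand-ins: shallow soil EC_a and relative ET on a common grid
rng = np.random.default_rng(2)
ec_shallow = rng.normal(10, 2, 500)                  # mS/m, hypothetical
et_rel = 0.6 + 0.02 * ec_shallow + rng.normal(0, 0.03, 500)

tau, p = kendalltau(ec_shallow, et_rel)
print(f"Kendall tau = {tau:.2f}, p = {p:.1e}")
```

A rank correlation is a reasonable default here because ECa and ET maps often have skewed distributions and their relationship need not be linear, only monotonic.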
Show Figures

Graphical abstract
Figure 1: Study site in the Central Sands region of Wisconsin, U.S.A., adapted from Nocco et al., 2019. Georeferencing was conducted using 2013 National Agriculture Imagery Program data for Portage County, WI.
Figure 2: Maps of apparent soil electrical conductivity (ECa), adapted from Nocco et al., 2019. Field labels refer to Figure 1; ECa surveys were conducted on 13 April 2015 for fields P, L, E, and W and on 21 April 2016 for fields H and G.
Figure 3: Leaf area index (m² m⁻²) for fields H, G, P, L, E, and W for twelve airborne missions in 2014–2016. All maps are linearly scaled to the same color bar; blank spaces indicate imagery missing because of cloud interference during missions.
Figure 4: Cumulative precipitation (mm), irrigation (mm), and reference evapotranspiration (mm, 'RET') for 1 June–31 August of the 2014–2016 growing seasons on Isherwood Farms. Irrigation time series for fields H, G, P, L, E, and W are labelled H_irrig, G_irrig, P_irrig, L_irrig, E_irrig, and W_irrig, respectively. Airborne missions are demarcated by red vertical lines.
Figure 5: Comparison of HRMET and Shuttleworth–Wallace ET rates (mm hr⁻¹) for irrigated sweet corn, potato, pea, and pearl millet cropping systems. The dashed lines represent a 1:1 fit ± 25% error.
Figure 6: HRMET-calculated mean ET rates (mm hr⁻¹) for fields H, G, P, L, E, and W for 12 airborne missions in 2014–2016. All maps are linearly scaled to the same color bar; blank spaces indicate imagery missing because of cloud interference.
Figure 7: Standard deviation of HRMET-calculated ET rates (mm hr⁻¹) for fields H, G, P, L, E, and W for 12 airborne missions in 2014–2016. All maps are linearly scaled to the same color bar; blank spaces indicate imagery missing because of cloud interference.
Figure 8: Maps of relative evapotranspiration (ET_R, unitless) for fields H, G, P, L, E, and W for 12 airborne missions in 2014–2016. Color bars represent mean ET_R linearly normalized to the 5th (red) and 95th (blue) percentiles for each field; blank spaces indicate imagery missing because of cloud interference.
Figure 9: Kendall's tau correlation coefficient matrices for fields H, G, P, L, E, and W. Correlations are for shallow and deep apparent electrical conductivity (EC_sh and EC_dp, respectively) and relative evapotranspiration for each crop rotation ('sweet corn', 'potato', 'pea', and 'field corn'). Correlations are statistically significant at an alpha level of 0.001 unless crossed out with 'X' symbols.
Figure 10: Kendall's tau correlation coefficient matrices for all six fields (H, G, P, L, E, and W) together. Correlations are for shallow and deep apparent electrical conductivity (EC_sh and EC_dp, respectively) and relative evapotranspiration for each crop rotation ('sweet corn', 'potato', and 'field corn'). All correlations are statistically significant at an alpha level of 0.001.
21 pages, 4749 KiB  
Article
Crop Yield Estimation Using Time-Series MODIS Data and the Effects of Cropland Masks in Ontario, Canada
by Jiangui Liu, Jiali Shang, Budong Qian, Ted Huffman, Yinsuo Zhang, Taifeng Dong, Qi Jing and Tim Martin
Remote Sens. 2019, 11(20), 2419; https://doi.org/10.3390/rs11202419 - 18 Oct 2019
Cited by 39 | Viewed by 6845
Abstract
This study investigated the estimation of grain yields of three major annual crops in Ontario (corn, soybean, and winter wheat) using MODIS reflectance data extracted with a general cropland mask and with crop-specific masks. A time-series of the two-band enhanced vegetation index (EVI2) was derived from the 8-day composite 250 m MODIS reflectance data from 2003 to 2016. Using the general cropland mask, the strongest positive linear correlation between crop yields and EVI2 was observed at the end of July to early August, whereas a negative correlation was observed in spring. Using crop-specific masks, the time of the strongest positive linear correlation for winter wheat was between mid-May and early June, corresponding to the crop's peak growth stages. EVI2 derived at the peak growth stages of a crop provided good predictive capability for grain yield estimation, with considerable inter-annual variation. A multiple linear regression model was established for county-level yield estimation using EVI2 at peak growth stages and the year as independent variables. The model accounted for about 30% and 47% of the spatiotemporal variability of grain yields for winter wheat, 63% and 65% for corn, and 59% and 64% for soybean using the general cropland mask and crop-specific masks, respectively. The negative correlation during spring indicates that vegetation indices extracted using a general cropland mask should be used with caution in regions with mixed crops, as factors other than the growth conditions of the targeted crops may also be captured by remote sensing data. Full article
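The EVI2 formula (Jiang et al. 2008) and the two-predictor regression can be sketched as follows. The county-year samples below are synthetic, and the fitted coefficients are not the paper's:

```python
import numpy as np

def evi2(nir, red):
    """Two-band enhanced vegetation index (Jiang et al. 2008)."""
    return 2.5 * (nir - red) / (nir + 2.4 * red + 1.0)

# Hypothetical county-year samples: peak-stage EVI2, year, reported yield
rng = np.random.default_rng(3)
n = 200
peak_evi2 = rng.uniform(0.4, 0.8, n)
year = rng.integers(2003, 2017, n).astype(float)
yield_t_ha = (2.0 + 8.0 * peak_evi2 + 0.05 * (year - 2003)
              + rng.normal(0, 0.3, n))

# Multiple linear regression: yield ~ EVI2 + year (ordinary least squares)
X = np.column_stack([np.ones(n), peak_evi2, year - 2003])
beta, *_ = np.linalg.lstsq(X, yield_t_ha, rcond=None)
resid = X @ beta - yield_t_ha
r2 = 1 - np.sum(resid**2) / np.sum((yield_t_ha - yield_t_ha.mean())**2)
print(f"coefficients = {np.round(beta, 2)}, R^2 = {r2:.2f}")
```

Including the year as a predictor, as the paper does, lets the model absorb inter-annual effects (weather, management trends) that peak-stage EVI2 alone does not capture.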
Show Figures

Graphical abstract

Graphical abstract
Full article ">Figure 1
<p>Study area in Ontario, Canada. The dark lines outline the five regions in Ontario; the light gray polygons are counties included in the study, and the dark gray polygons are three counties representative of Southern (Chatham-Kent), Western (Perth), and Central (Durham) Ontario.</p>
Full article ">Figure 2
<p>Comparison of county level crop area proportions estimated annually using the fuzzy decision tree classifier and reported by the Ontario Ministry of Agriculture, Food and Rural Affairs (OMAFRA) for the period between 2003 and 2016.</p>
Full article ">Figure 3
<p>Crop growth profiles from March to October, illustrated using two-band enhanced vegetation index (EVI2) extracted for winter wheat (<b>a</b>), corn (<b>b</b>), soybean (<b>c</b>), and the three crops combined (<b>d</b>). The curves were derived from 2016 for three representative counties in Southern (Chathem-Kent), Western (Perth), and Central (Durham) Ontario (refer to <a href="#remotesensing-11-02419-f001" class="html-fig">Figure 1</a>).</p>
Full article ">Figure 4
<p>Correlation coefficient between crop yield and EVI2 obtained through the multiple linear regression model, using data for all years (2003–2016). EVI2 was extracted using general cropland mask (GM) and crop-specific masks (SM).</p>
Full article ">Figure 5
<p>Annual variation of the strongest correlation between crop yields and time-series EVI2, extracted using a general cropland mask (GM) and crop-specific masks (SM) for winter wheat (<b>a</b>,<b>d</b>), corn (<b>b</b>,<b>e</b>), and soybean (<b>c</b>,<b>f</b>). Yield: average county level yield; <span class="html-italic">R</span><sup>2</sup>: coefficient of determiantion; CV: coefficient of variation of yields; RRMSE: root mean square error relative to average yield; MRAE: mean relative absolute error. Samples from the three agricultural regions were analyzed together.</p>
Full article ">Figure 6
<p>Example annual relationships between crop yields and EVI2 derived using crop specific masks for the three annual crops; only data from 3 years (2006, 2011, and 2016) are shown; DOY refers to MODIS nominal composite day-of-year with the strongest correlation between EVI2 and crop yields.</p>
Full article ">Figure 6 Cont.
<p>Example annual relationships between crop yields and EVI2 derived using crop specific masks for the three annual crops; only data from 3 years (2006, 2011, and 2016) are shown; DOY refers to MODIS nominal composite day-of-year with the strongest correlation between EVI2 and crop yields.</p>
Full article ">Figure 7
<p>Relationships between reported and estimated crop yields at the county level for the period from 2003 to 2016. Crop yields were estimated using a multiple linear regression model from average EVI2 at the peak growth stages and year as independent variables. EVI2 was extracted using a general cropland mask (GM) and crop-specific masks (SM).</p>
Full article ">Figure 8
<p>Relationships of county level areal proportions of corn and soybean combined with (<b>a</b>) EVI2 at day-of-year (DOY) 153 and (<b>b</b>) corn yield.</p>
Full article ">Figure 9
<p>Comparison of yield estimation error, that is, mean relative absolute error (MRAE, %), of 250 m MODIS EVI2-based estimates from year-specific models and an all-year model. EVI2 was extracted using a general cropland mask and crop-specific masks for 2003–2016. The circles show the results for the years with a large difference between the all-year model and the year-specific model using the general cropland mask.</p>
Full article ">Figure 10
<p>Monthly precipitation for July–September from the weather station at London, Ontario (Station identifier: 6144475), versus corresponding long-term normals (1985–2016), shown in the same colors but with dashed lines.</p>
Full article ">
25 pages, 4578 KiB  
Article
Large Scale Agricultural Plastic Mulch Detecting and Monitoring with Multi-Source Remote Sensing Data: A Case Study in Xinjiang, China
by Yuankang Xiong, Qingling Zhang, Xi Chen, Anming Bao, Jieyun Zhang and Yujuan Wang
Remote Sens. 2019, 11(18), 2088; https://doi.org/10.3390/rs11182088 - 6 Sep 2019
Cited by 47 | Viewed by 6277
Abstract
Plastic mulching has been widely practiced in crop cultivation worldwide due to its potential to significantly increase crop production. However, it also has a great impact on the regional climate and ecological environment. More importantly, it often leads to unexpected soil pollution due to fine plastic residuals. Therefore, accurate and timely monitoring of the temporal and spatial distribution of plastic mulch practice over large areas is of great interest for assessing its impacts. However, existing plastic-mulched farmland (PMF) detection efforts are limited either to small areas with high-resolution images or to coarse-resolution images of large areas. In this study, we examined the potential of cloud computing and multi-temporal, multi-sensor satellite images for detecting PMF over large areas. We first built the plastic-mulched farmland mapping algorithm (PFMA) rules by analyzing the spectral, temporal, and auxiliary features of PMF in remote sensing imagery with the classification and regression tree (CART). We then applied the PFMA in the dry region of Xinjiang, China, where water resources are very scarce; plastic mulch has therefore been used intensively, and its usage is expected to increase significantly in the near future. The experimental results demonstrated that the PFMA reached an overall accuracy of 92.2% with a producer’s accuracy of 97.6% and a user’s accuracy of 86.7%, and the F-score was 0.914 for the PMF class. We further monitored and analyzed the dynamics of plastic mulch practice in Xinjiang by applying the PFMA to the years 2000, 2005, 2010, and 2015. The general pattern of plastic mulch usage dynamics in Xinjiang from 2000 to 2015 was well captured by our multi-temporal analysis. Full article
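The producer’s accuracy, user’s accuracy, and F-score reported in the abstract are standard per-class confusion-matrix statistics. A minimal sketch of how they relate (the counts below are hypothetical; the paper’s confusion matrix is not reproduced here):

```python
def class_scores(tp, fp, fn):
    """Per-class accuracy measures from binary confusion-matrix counts.

    tp: correctly detected PMF pixels; fp: false detections; fn: missed PMF pixels.
    """
    ua = tp / (tp + fp)            # user's accuracy (precision)
    pa = tp / (tp + fn)            # producer's accuracy (recall)
    f = 2 * ua * pa / (ua + pa)    # F-score: harmonic mean of UA and PA
    return pa, ua, f
```

With the rounded accuracies in the abstract (PA = 0.976, UA = 0.867), the harmonic mean works out to about 0.918; the reported F-score of 0.914 presumably comes from the unrounded pixel counts.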
Show Figures

Graphical abstract
Full article ">Figure 1
<p>The use of plastic film in China since 2000. (<b>a</b>) The trend of plastic film use in China; (<b>b</b>) the growth of the coverage area of plastic mulch in China; (<b>c</b>) plastic film use in Xinjiang, China; and (<b>d</b>) the ratio of plastic mulch use to total plastic film use.</p>
Full article ">Figure 2
<p>Location of the study area (the imagery displayed in the right panel is a true color mosaic generated in Google Earth) and the three main types of plasticulture in Xinjiang, China, as of the year 2018. (<b>a</b>) The location of the study area; (<b>b</b>) the spatial distribution of sample points in the study area; (<b>c</b>) greenhouses in Midong District, Urumqi (44°4′15″N, 87°30′19″E); (<b>d</b>) low tunnels in Changji City (44°3′2″N, 87°21′43″E); and (<b>e</b>,<b>f</b>) plastic mulch in Changji City (44°13′55″N, 86°37′36″E).</p>
Full article ">Figure 3
<p>Overview of the methodology for PMF mapping and plastic-mulched farmland mapping algorithm (PFMA) for Xinjiang.</p>
Full article ">Figure 4
<p>Spectral separability of different ground objects on the different bands of Sentinel-2 MSI imagery (1: Blue; 2: Red; 3: NIR; 4: SWIR2).</p>
Full article ">Figure 5
<p>Two types of PMF as well as bare soil and the respective spectral reflectance curves. (<b>a</b>,<b>b</b>) The true color and arbitrary color composite (R = SWIR2, G = NIR, B = Blue) Sentinel-2 MSI image of PMF1 in the Shihezi City, Xinjiang, China; (<b>c</b>,<b>d</b>) the true color and arbitrary color composite (R = SWIR2, G = NIR, B = Blue) Sentinel-2 MSI image of PMF2 in Shihezi City, Xinjiang, China; and (<b>e</b>) the spectral reflectance curves of bare soil, PMF1 and PMF2 by Sentinel-2 MSI imagery (1: Blue; 2: Red; 3: NIR; 4: SWIR2).</p>
Full article ">Figure 6
<p>The PFMA established for each zone.</p>
Full article ">Figure 7
<p>The spatial distribution of PMF extracted from PFMA in Xinjiang in 2016 (PMF_1: plastic-mulched farmland_1 areas; PMF_2: plastic-mulched farmland_2 areas; NLUD-2016: National Land Use Dataset in China in 2016): (<b>a</b>) The PMF extraction results in the whole Xinjiang (the red color represents PMF in the year 2016 as detected with our proposed method, and the green color represents cropland from the NLUD-2016, which shows the overall extent of agriculture in Xinjiang, China, as of the year 2016); (<b>b</b>) the PMF extraction results in the southern part of Xinjiang; (<b>c</b>) the PMF extraction results in the northern part of Xinjiang; (<b>d</b>,<b>g</b>) true color composite of sentinel-2 MSI; (<b>e</b>,<b>h</b>) arbitrary color composite of Sentinel-2 (R = SWIR 2, G = NIR, B = Blue); and (<b>f</b>,<b>i</b>) comparison of PMF extraction results with Sentinel-2 true color composite data.</p>
Full article ">Figure 8
<p>The possible factors behind the spatial distribution of PMF. (<b>a</b>) The administrative division of Xinjiang; (<b>b</b>) the ratio of PMF to total cropland (the cropland data from NLUD-2016 in 2016); (<b>c</b>) the ratio of corn and cotton area to all crops area in Xinjiang; and (<b>d</b>) the distribution of PMF2 in the Tarbagatay Prefecture wind district.</p>
Full article ">Figure 9
<p>The spatial distribution of PMF in Xinjiang in different years (PMF extracted from PFMA).</p>
Full article ">Figure 10
<p>A visual comparison of different results (shown in white color) from the different algorithms overlaid on the false color composite of Landsat-5 TM (R = SWIR 2, G = NIR, B = Blue).</p>
Full article ">
30 pages, 23602 KiB  
Article
Intercomparison of AVHRR GIMMS3g, Terra MODIS, and SPOT-VGT NDVI Products over the Mongolian Plateau
by Yongqing Bai, Yaping Yang and Hou Jiang
Remote Sens. 2019, 11(17), 2030; https://doi.org/10.3390/rs11172030 - 29 Aug 2019
Cited by 34 | Viewed by 4918
Abstract
The rapid development of remote sensing technology has promoted the generation of different vegetation index products, which have contributed substantially to economic development and to the monitoring of natural environmental changes. Results derived from the various vegetation index products also differ over time and space. In this work, the consistency among three global normalized difference vegetation index (NDVI) products, namely GIMMS3g NDVI, MOD13A3 NDVI, and SPOT-VGT NDVI, is intercompared and validated against Landsat 8 NDVI at the biome and regional scale over the Mongolian Plateau (MP) from 2000 to 2014 by decomposing the time series datasets. The agreement coefficient (AC) and statistical scores such as the Pearson correlation coefficient, root mean square error (RMSE), mean bias error (MBE), and standard deviation (STD) are used to evaluate the consistency between the three NDVI datasets. Intercomparison results reveal that GIMMS3g NDVI generally has the highest values over the MP, while SPOT-VGT NDVI has the lowest. The spatial distribution of AC values indicates that the three NDVI datasets are highly consistent with each other in the northern regions of the MP, and MOD13A3 NDVI and SPOT-VGT NDVI have better consistency in expressing vegetation cover and change trends due to the highest proportions of pixels with AC values greater than 0.6. However, the trend components of the decomposed NDVI sequences show that SPOT-VGT NDVI values are about 0.02 lower than the other two datasets over the whole period. The zonal characteristics show that GIMMS3g NDVI in January 2013 is significantly higher than the other two datasets. However, in July 2013, the three datasets are remarkably consistent because of the greater vegetation coverage. 
Consistency validation results show that SPOT-VGT NDVI values agree better with Landsat 8 NDVI than those of GIMMS3g NDVI and MOD13A3 NDVI, and the consistencies in the northeast of the MP are higher than in the northwest regions. Full article
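The pairwise statistics named in the abstract (Pearson correlation, RMSE, and MBE) can be sketched for two co-located NDVI time series as follows; this is a pure-Python illustration of the standard definitions, not the authors' code:

```python
import math

def compare_series(x, y):
    """Pearson r, RMSE, and mean bias error (y minus x) for two equal-length NDVI series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    mbe = sum(b - a for a, b in zip(x, y)) / n
    rmse = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, y)) / n)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    r = cov / (sx * sy)
    return r, rmse, mbe
```

A constant offset between two products (as with the roughly 0.02 SPOT-VGT bias noted above) shows up in MBE and RMSE while leaving the correlation coefficient unchanged.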
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Geographic location and elevation of the Mongolian Plateau (MP).</p>
Full article ">Figure 2
<p>Land cover map of the MP based on the ESA GlobCover 2009. The numbers in the legend represent different types of land cover, corresponding to the land cover code in the <a href="#remotesensing-11-02030-t002" class="html-table">Table 2</a>.</p>
Full article ">Figure 3
<p>Biome distribution of the MP based on the World Wildlife Fund (WWF) terrestrial ecoregion map.</p>
Full article ">Figure 4
<p>Spatial distribution of the agreement coefficient (AC) between (<b>a</b>) GIMMS3g NDVI and MOD13A3 NDVI, (<b>b</b>) GIMMS3g NDVI and SPOT-VGT NDVI, and (<b>c</b>) MOD13A3 NDVI and SPOT-VGT NDVI.</p>
Full article ">Figure 5
<p>Spatial distribution of the mean bias error (MBE) between (<b>a</b>) GIMMS3g NDVI and MOD13A3 NDVI, (<b>b</b>) GIMMS3g NDVI and SPOT-VGT NDVI, and (<b>c</b>) MOD13A3 NDVI and SPOT-VGT NDVI.</p>
Full article ">Figure 6
<p>Time series of decomposed of GIMMS3g NDVI, MOD13A3 NDVI, and SPOT-VGT NDVI in the MP from February 2000 to May 2014. (<b>a</b>) Original observation values, (<b>b</b>) variation trends, (<b>c</b>) seasonal periodic sequence, and (<b>d</b>) the residual parts of three datasets.</p>
Full article ">Figure 7
<p>Time series of deseasonalized NDVI values in the biomes of (<b>a</b>) montane grasslands and shrublands, (<b>b</b>) temperate conifer forests, (<b>c</b>) boreal forest/taiga, (<b>d</b>) temperate grasslands, savannas, and shrublands, (<b>e</b>) deserts and xeric shrublands, (<b>f</b>) temperate broadleaf and mixed forests and (<b>g</b>) tundra in the MP.</p>
Full article ">Figure 8
<p>Average NDVI values at different latitudes from 37.4°N to 58.4°N in (<b>a</b>) January 2013 and (<b>b</b>) July 2013.</p>
Full article ">Figure 9
<p>Average NDVI values at different longitudes from 87.8°E to 126.1°E in (<b>a</b>) January 2013 and (<b>b</b>) July 2013.</p>
Full article ">Figure 10
<p>Scatter plots and fitting curves of average NDVI values at different elevations from 95 m to 3788 m in (<b>a</b>) January 2013 and (<b>b</b>) July 2013.</p>
Full article ">Figure 10 Cont.
<p>Scatter plots and fitting curves of average NDVI values at different elevations from 95 m to 3788 m in (<b>a</b>) January 2013 and (<b>b</b>) July 2013.</p>
Full article ">Figure 11
<p>Spatial distribution of 35 Landsat 8 OLI/TIRS images for NDVI verification.</p>
Full article ">Figure 12
<p>Consistency test results of Landsat 8 NDVI and the three NDVI products. (<b>a</b>) Pearson correlation coefficient; (<b>b</b>) root mean square error (RMSE); (<b>c</b>) mean bias error. The number of Landsat images in the figures corresponded to the number field in <a href="#remotesensing-11-02030-t006" class="html-table">Table 6</a>.</p>
Full article ">Figure 13
<p>Contrastive analysis of GIMMS3g NDVI (<b>a</b>–<b>d</b>), MOD13A3 NDVI (<b>e</b>–<b>h</b>), SPOT-VGT NDVI (<b>i</b>–<b>l</b>) and Landsat 8 NDVI (<b>m</b>–<b>p</b>) in different time and space ranges. The Path/Row in the figure represents the path and row number of Landsat 8 images.</p>
Full article ">Figure 14
<p>Trend components of decomposed average values of GIMMS3g NDVI, MOD13A3 NDVI, and SPOT-VGT NDVI in different spatial resolution (1 km and 8 km) in the MP.</p>
Full article ">
22 pages, 4304 KiB  
Article
Field-Scale Crop Seeding Date Estimation from MODIS Data and Growing Degree Days in Manitoba, Canada
by Taifeng Dong, Jiali Shang, Budong Qian, Jiangui Liu, Jing M. Chen, Qi Jing, Brian McConkey, Ted Huffman, Bahram Daneshfar, Catherine Champagne, Andrew Davidson and Dan MacDonald
Remote Sens. 2019, 11(15), 1760; https://doi.org/10.3390/rs11151760 - 26 Jul 2019
Cited by 14 | Viewed by 5857
Abstract
Information on crop seeding date is required in many applications such as crop management and yield forecasting. This study presents a novel method to estimate crop seeding date at the field level from time-series 250-m Moderate Resolution Imaging Spectroradiometer (MODIS) data and growing degree days (GDD; base 5 °C; °C-days). The start of growing season (SOS) was first derived from time-series EVI2 (two-band Enhanced Vegetation Index) calculated from a MODIS 8-day composite surface reflectance product (MOD09Q1; Collection 6). Based on GDD calculated from the Daymet gridded estimates of daily weather parameters, a simple model was developed to establish a linkage between the observed seeding date and the SOS. Calibration and validation of the model were conducted for three major crops, spring wheat, canola and oats, in the Province of Manitoba, Canada. The estimated SOS had a strong linear correlation with the observed seeding date, with a deviation of a few days depending on the year. The seeding date of the three crops can be calculated from the SOS by adjusting the number of days needed to accumulate the GDD (AGDD) required for emergence. The overall root-mean-square-difference (RMSD) of the estimated seeding date was less than 10 days. Validation showed that the accuracy of the estimated seeding date was crop-type independent. The developed method is useful for estimating historical crop seeding dates from remote sensing data in Canada to support studies of the interactions among seeding date, crop management and crop yield under climate change. It is anticipated that this method can be adapted to other crops in other locations using the same or different satellite data. Full article
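The GDD accumulation described in the abstract (base 5 °C) follows the standard agronomic definition; a minimal sketch, with the Daymet ingestion and SOS extraction omitted:

```python
def gdd(tmax, tmin, base=5.0):
    """Daily growing degree days: mean daily temperature above the base, floored at zero."""
    return max(0.0, (tmax + tmin) / 2.0 - base)

def agdd(daily_temps, base=5.0):
    """Accumulated GDD series over a sequence of (tmax, tmin) pairs."""
    total, series = 0.0, []
    for tmax, tmin in daily_temps:
        total += gdd(tmax, tmin, base)
        series.append(total)
    return series
```

The seeding date is then recovered by stepping back from the remotely sensed SOS until the crop-specific AGDD needed for emergence is accounted for; that calibration step is specific to the paper and not sketched here.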
Show Figures

Graphical abstract
Full article ">Figure 1
<p>The study area in the Province of Manitoba, Canada, and the distribution of annual cropland in 2009. The subset figure highlights the location of the study area within Canada.</p>
Full article ">Figure 2
<p>Variability of daily mean temperature (<b>a</b>) and accumulated growing degree days (AGDD, °C-days, base 5 °C) (<b>b</b>) for the years 2006 and 2009.</p>
Full article ">Figure 3
<p>Scatter plot of observed seeding date and remotely sensed Start-Of-Season (SOS) for spring wheat (<b>a</b>,<b>d</b>), canola (<b>b</b>,<b>e</b>), and oats (<b>c</b>,<b>f</b>) in 2006 and 2009; DOY is the day of year. The dashed black, long-dashed black, and solid black lines denote the 1:1 line and the linear regressions between observed seeding date and SOS in 2006 and 2009, respectively.</p>
Full article ">Figure 3 Cont.
<p>Scatter plot of observed seeding date and remotely sensed Start-Of-Season (SOS) for spring wheat (<b>a</b>,<b>d</b>), canola (<b>b</b>,<b>e</b>), and oats (<b>c</b>,<b>f</b>) in 2006 and 2009; DOY is the day of year. The dashed black, long-dashed black, and solid black lines denote the 1:1 line and the linear regressions between observed seeding date and SOS in 2006 and 2009, respectively.</p>
Full article ">Figure 4
<p>Scatter plot of observed seeding date and remotely sensed SOS for different years: 2006 (<b>a</b>,<b>c</b>) and 2009 (<b>b</b>,<b>d</b>); DOY is the day of year, and the dashed black line denotes the 1:1 line.</p>
Full article ">Figure 5
<p>Histogram of AGDD from seeding date to the remotely sensed SOS using the inflection-based (SOS<sub>inflection</sub>) and the threshold-based method (SOS<sub>20</sub>) for spring wheat (<b>a</b>,<b>d</b>), canola (<b>b</b>,<b>e</b>), and oats (<b>c</b>,<b>f</b>) in 2006 and 2009, respectively.</p>
Full article ">Figure 6
<p>Histogram of AGDD from seeding date to the remotely sensed SOS using the inflection-based (SOS<sub>inflection</sub>) and that using the threshold-based method (SOS<sub>20</sub>) for 2006 (<b>a</b>,<b>c</b>) and 2009 (<b>b</b>,<b>d</b>), respectively.</p>
Full article ">Figure 6 Cont.
<p>Histogram of AGDD from seeding date to the remotely sensed SOS using the inflection-based (SOS<sub>inflection</sub>) and that using the threshold-based method (SOS<sub>20</sub>) for 2006 (<b>a</b>,<b>c</b>) and 2009 (<b>b</b>,<b>d</b>), respectively.</p>
Full article ">Figure 7
<p>Comparison between observed and estimated seeding date using the inflection-based approach (SOS<sub>inflection</sub>); the dashed gray lines mark the 10-day confidence interval, and the dashed black line denotes the 1:1 line.</p>
Full article ">Figure 8
<p>Comparison between observed and estimated seeding date using the threshold-based approach (SOS<sub>20</sub>); the dashed gray lines mark the 10-day confidence interval, and the dashed black line denotes the 1:1 line.</p>
Full article ">Figure 9
<p>Time-series EVI2 for the three crops (spring wheat, canola and oats) in two different years (2006 (<b>a</b>) and 2009 (<b>b</b>)). The two grey vertical lines highlight the period of vegetative stage.</p>
Full article ">Figure A1
<p>An example showing the extraction of SOS<sub>20</sub> and SOS<sub>inflection</sub> from the fitted curve of EVI2. The K’ is the first-order derivative of the curvature of the fitted EVI2 curve (Equation (3)).</p>
Full article ">
18 pages, 6034 KiB  
Article
Spatiotemporal Analysis of MODIS NDVI in the Semi-Arid Region of Kurdistan (Iran)
by Mehdi Gholamnia, Reza Khandan, Stefania Bonafoni and Ali Sadeghi
Remote Sens. 2019, 11(14), 1723; https://doi.org/10.3390/rs11141723 - 20 Jul 2019
Cited by 21 | Viewed by 5464
Abstract
In this study, the spatiotemporal behavior of vegetation cover in the Kurdistan province of Iran was analyzed for the first time with the TIMESAT and Breaks for Additive Season and Trend (BFAST) algorithms. They were applied to Normalized Difference Vegetation Index (NDVI) time series from 2000 to 2016 derived from Moderate Resolution Imaging Spectroradiometer (MODIS) observations. The TIMESAT software package was used to estimate the seasonal parameters of NDVI and their relation to land covers. BFAST was applied for identifying abrupt changes (breakpoints) of NDVI and their magnitudes. The results from TIMESAT and BFAST were first reported separately, and then interpreted together. TIMESAT outcomes showed that the lowest and highest amplitudes of NDVI during the whole time period occurred in 2008 and 2010. The spatial distribution of the number of breakpoints showed different behaviors in the west and east of the study area, and the breakpoint frequency confirmed the extreme NDVI amplitudes in 2008 and 2010 found by TIMESAT. For the first time in Iran, a correlation analysis between accumulated precipitations and maximum NDVIs (from one to seven months before the NDVI maximum) was conducted. The results showed that precipitation one month before had a higher correlation with the maximum NDVIs in the region. Overall, the results describe the NDVI behavior in terms of greenness, lifetime, and abrupt changes for the different land covers and across the years, suggesting how the northwest and west of the study area can be more susceptible to drought conditions. Full article
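The lagged precipitation analysis described above can be sketched as follows: for each accumulation window of one to seven months before the NDVI peak, sum the monthly precipitation in each year and correlate the sums with the yearly maximum NDVI. Function names and the toy data are illustrative assumptions, not the authors' implementation:

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def best_accumulation(yearly_precip, max_ndvi, peak_month, max_months=7):
    """Return (months, r): the accumulation window best correlated with peak NDVI.

    yearly_precip: one list of monthly precipitation per year;
    peak_month: index of the month containing the NDVI peak (around DOY 175 in the study).
    """
    best = None
    for m in range(1, max_months + 1):
        acc = [sum(p[peak_month - m + 1 : peak_month + 1]) for p in yearly_precip]
        r = pearson(acc, max_ndvi)
        if best is None or r > best[1]:
            best = (m, r)
    return best
```

The study's finding corresponds to the one-month window giving the strongest correlation across years.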
Show Figures

Graphical abstract
Full article ">Figure 1
<p>(<b>A</b>) Mean of annual precipitation from 2000 to 2016 in the Kurdistan province of Iran estimated by Climate Hazards Group InfraRed Precipitation with Station data (CHIRPS), (<b>B</b>) elevation map from ASTER data, (<b>C</b>) mean of annual maximum Normalized Difference Vegetation Index (NDVI) from 2000 to 2016.</p>
Full article ">Figure 2
<p>Land-cover map in 2015 with eight classes and their coverage percentage [<a href="#B54-remotesensing-11-01723" class="html-bibr">54</a>] in the Kurdistan province. Shrubland and grassland refer to “natural” classes.</p>
Full article ">Figure 3
<p>Normalized Difference Vegetation Index (NDVI) parameters extracted from TIMESAT: amplitude, start of season (SoS), large integral (Linteg), small integral (Sinteg), base value, end of season (EOS), maximum value.</p>
Full article ">Figure 4
<p>(<b>A</b>) TIMESAT modeling of the NDVI observations for a sample point in the study area, including the start and end of the season. (<b>B</b>) Sorted amplitudes of NDVI from TIMESAT, for each year, for all the pixels in the study area.</p>
Full article ">Figure 5
<p>(<b>A</b>) Difference between the NDVI maximum values in 2008 and mean of maximum values (MMV) across 2000–2016, (<b>B</b>) Difference between the NDVI maximum values in 2010 and MMV across 2000–2016, (<b>C</b>) difference between the NDVI base values in 2008 and mean of base values (MBV) across 2000–2016, (<b>D</b>) difference between the NDVI base values in 2010 and MBV across 2000–2016.</p>
Full article ">Figure 6
<p>Error bar plot of TIMESAT seasonal parameters in relation to land covers for the whole time series: (<b>A</b>) maximum value of NDVI, (<b>B</b>) base value of NDVI, (<b>C</b>) day of year (DOY) of middle of season, (<b>D</b>) DOY of start of season, (<b>E</b>) DOY of end of season, and (<b>F</b>) length of season (the plots are sorted based on the mean values).</p>
Full article ">Figure 7
<p>NDVI annual parameters from TIMESAT in the study area. (<b>A</b>) Maximum value, (<b>B</b>) base value, (<b>C</b>) DOY middle of season, and (<b>D</b>) length of season (DOY).</p>
Full article ">Figure 8
<p>(<b>A</b>) Map of the number of breakpoints for the whole NDVI time series (2000–2016), and (<b>B</b>) frequency of the breakpoints for each year.</p>
Full article ">Figure 9
<p>(<b>A</b>) Histograms of magnitudes of breakpoints for the whole NDVI time series, and (<b>B</b>) mean number of breakpoints for each land cover.</p>
Full article ">Figure 10
<p>(<b>A</b>) The error bar plot of positive magnitudes, (<b>B</b>) the error bar plot of negative magnitudes in relation to land covers for the whole time series (the plots are sorted based on the mean of magnitudes).</p>
Full article ">Figure 11
<p>Spatial distribution of (<b>A</b>) maximum of breakpoint magnitudes with positive signs, (<b>B</b>) maximum of breakpoint magnitudes with negative signs for the whole time series 2000–2016.</p>
Full article ">Figure 12
<p>(<b>A</b>) Correlations between the maximum NDVI values and the accumulated precipitations from CHIRPS in the first (M1) to the seventh month (M7) before DOY 175, for all the pixels in the study area. (<b>B</b>) Accumulated monthly precipitations in the first to the seventh month before DOY 175 in each year for the whole area.</p>
Full article ">Figure 13
<p>Standardized Precipitation Index (SPI) maps from 2006 (<b>A</b>) to 2011 (<b>F</b>) showing the wet and dry conditions in the study area.</p>
Full article ">
24 pages, 13581 KiB  
Article
Mapping Paddy Rice Planting Area in Northeastern China Using Spatiotemporal Data Fusion and Phenology-Based Method
by Qi Yin, Maolin Liu, Junyi Cheng, Yinghai Ke and Xiuwan Chen
Remote Sens. 2019, 11(14), 1699; https://doi.org/10.3390/rs11141699 - 18 Jul 2019
Cited by 61 | Viewed by 6764
Abstract
Accurate paddy rice mapping with fine spatial detail is significant for ensuring food security and maintaining sustainable environmental development. In northeastern China, rice is planted in fragmented and patchy fields and its production has reached over 10% of the total rice production in China, which has brought an increasing need for updated paddy rice maps in the region. Existing methods for mapping paddy rice are often based on remote sensing techniques using optical images. However, it is difficult to obtain high quality time series remote sensing data due to the frequent cloud cover in rice planting areas and the low temporal sampling frequency of satellite imagery. Therefore, paddy rice maps are often developed using a few Landsat or time series MODIS images, which has limited the accuracy of paddy rice mapping. To overcome these limitations, we present a new strategy that integrates a spatiotemporal fusion algorithm and a phenology-based algorithm to map paddy rice fields. First, we applied the spatial and temporal adaptive reflectance fusion model (STARFM) to fuse the Landsat and MODIS data and obtain multi-temporal Landsat-like images. From the fused Landsat-like images and the original Landsat images, we derived time series vegetation indices (VIs) with high temporal and high spatial resolution. Then, the phenology-based algorithm, considering the unique physical features of paddy rice during the flooding and transplanting phases/open-canopy period, was used to map paddy rice fields. In order to prove the effectiveness of the proposed strategy, we compared our results with those from three other classification strategies: (1) phenology-based classification based on original Landsat images only, (2) phenology-based classification based on original MODIS images only, and (3) random forest (RF) classification based on both Landsat and Landsat-like images. 
The validation experiments indicate that our fusion- and phenology-based strategy could improve the overall accuracy of classification by 6.07% (from 92.12% to 98.19%) compared to using Landsat data only, by 8.96% (from 89.23% to 98.19%) compared to using MODIS data only, and by 4.66% (from 93.53% to 98.19%) compared to using the RF algorithm. The results show that our new strategy, integrating a spatiotemporal fusion algorithm and a phenology-based algorithm, provides an effective and robust approach to map paddy rice fields in regions with limited available images, as well as in areas with patchy and fragmented fields. Full article
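The "unique physical features of paddy rice during the flooding and transplanting phase" are typically captured by comparing a water-sensitive index (LSWI) with the greenness indices (NDVI, EVI). A hedged sketch using the standard band formulas; the 0.05 threshold is a value commonly used in the phenology-based rice-mapping literature and is an assumption here, not necessarily the threshold used in this paper:

```python
def ndvi(nir, red):
    return (nir - red) / (nir + red)

def evi(nir, red, blue):
    # Standard MODIS EVI coefficients (G = 2.5, C1 = 6, C2 = 7.5, L = 1)
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)

def lswi(nir, swir):
    return (nir - swir) / (nir + swir)

def is_flooded(nir, red, blue, swir, t=0.05):
    """Flooding/transplanting signal: LSWI exceeds (or nearly reaches) the greenness indices."""
    return lswi(nir, swir) + t >= min(evi(nir, red, blue), ndvi(nir, red))
```

During flooding the standing water depresses NDVI/EVI while keeping LSWI high, so the criterion fires; once the closed canopy develops, the greenness indices overtake LSWI and the test fails.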
Show Figures

Graphical abstract
Full article ">Figure 1
<p>The location of the study area and its digital elevation model (DEM). The DEM is from Shuttle Radar Topography Mission (SRTM) 90 m Digital Elevation Data Version 4, obtained from <a href="http://www.srtm.csi.cgiar.org" target="_blank">http://www.srtm.csi.cgiar.org</a>. Main rivers, the county boundary and Landsat scene extent (path/row 114/27) are highlighted.</p>
Full article ">Figure 2
<p>Availability of time series cloudless Landsat images in China. The annual average observation numbers during 2000–2018 for (<b>a</b>) Landsat 8 and (<b>b</b>) Landsat 7; the total observation numbers in 2018 for (<b>c</b>) Landsat 8 and (<b>d</b>) Landsat 7. The study area is marked with red polygons.</p>
Full article ">Figure 3
<p>Landsat images used in the study after preprocessing (NIR, red and green bands), acquired on (<b>a</b>) April 1st, (<b>b</b>) April 25th, (<b>c</b>) May 19th, (<b>d</b>) May 27th, (<b>e</b>) July 6th, (<b>f</b>) August 7th, (<b>g</b>) September 16th, and (<b>h</b>) October 18th; the cloud cover percentage of each image is shown in the figure.</p>
Full article ">Figure 4
<p>The Spatial-temporal changes of nighttime LST derived from MYD11A2 data in 2018. (<b>a</b>) The first date with nighttime LST <math display="inline"><semantics> <mo>≥</mo> </semantics></math> 0 <math display="inline"><semantics> <mo>°</mo> </semantics></math>C; (<b>b</b>) the first date with nighttime LST <math display="inline"><semantics> <mo>≥</mo> </semantics></math> 5 <math display="inline"><semantics> <mo>°</mo> </semantics></math>C; (<b>c</b>) the first date with nighttime LST <math display="inline"><semantics> <mo>≥</mo> </semantics></math> 10 <math display="inline"><semantics> <mo>°</mo> </semantics></math>C; (<b>d</b>) the end date with nighttime LST <math display="inline"><semantics> <mo>≥</mo> </semantics></math> 0 <math display="inline"><semantics> <mo>°</mo> </semantics></math>C; (<b>e</b>) the end date with nighttime LST <math display="inline"><semantics> <mo>≥</mo> </semantics></math> 5 <math display="inline"><semantics> <mo>°</mo> </semantics></math>C; (<b>f</b>) the end date with nighttime LST <math display="inline"><semantics> <mo>≥</mo> </semantics></math> 10 <math display="inline"><semantics> <mo>°</mo> </semantics></math>C.</p>
Full article ">Figure 5
<p>Spatial distribution of field photos in the study area is shown in the left panel and part of the validation area of interest (AOIs) in the right panel.</p>
Full article ">Figure 6
<p>Overview of the methodology for fusion- and phenology-based paddy rice mapping using the Landsat data, MODIS data and other ancillary data.</p>
Full article ">Figure 7
<p>The details of images (MODIS and Landsat) used in the study, including the data used for the STARFM and phenology-based algorithm.</p>
Full article ">Figure 8
<p>The temporal profile of paddy rice VIs (NDVI, EVI, and LSWI) at the AOIs, the nighttime LST, and the crop calendar are shown in the figure. The dates when the nighttime LST first exceeds 0 <math display="inline"><semantics> <mo>°</mo> </semantics></math>C, 5 <math display="inline"><semantics> <mo>°</mo> </semantics></math>C, and 10 <math display="inline"><semantics> <mo>°</mo> </semantics></math>C are marked.</p>
Full article ">Figure 9
<p>The seasonal dynamics of NDVI, EVI and LSWI for the major land cover types at selected sites, extracted from the fused time series data. (<b>a</b>) Forest, (<b>b</b>) paddy, (<b>c</b>) water, (<b>d</b>) upland crops, (<b>e</b>) built-up land, and (<b>f</b>) wetland. All sites were selected according to field photos and high spatial resolution Google Earth images.</p>
Full article ">Figure 10
<p>Threshold selection for masks: (<b>a</b>) sparse vegetation mask, where the curves show the maximum EVI values of different land covers during nighttime LST &gt;5 <math display="inline"><semantics> <mo>°</mo> </semantics></math>C; (<b>b</b>) natural vegetation mask, where the curve shows the maximum values of different land cover types before mid- to late May.</p>
Full article ">Figure 11
<p>Scatter density plots of the real VIs and the predicted ones by the STARFM in (<b>a</b>) Prediction Test 1, (<b>b</b>) Prediction Test 2, (<b>c</b>) Prediction Test 3.</p>
Full article ">Figure 12
<p>Example of VI temporal profiles from the fused time series data. The top and bottom rows show the fused images and the real Landsat images from different times used in the study, respectively (site: 132.32°E, 47.31°N).</p>
Full article ">Figure 13
<p>The comparison of four paddy rice maps: (<b>a</b>) Landsat image (false-color composite of the red, green, and blue bands); (<b>c</b>) fusion-based paddy rice map; (<b>e</b>) Landsat-based paddy rice map; (<b>g</b>) MODIS-based paddy rice map; (<b>i</b>) RF-based paddy rice map. (<b>b</b>,<b>d</b>,<b>f</b>,<b>h</b>,<b>j</b>) display the spatial details of (<b>a</b>,<b>c</b>,<b>e</b>,<b>g</b>,<b>i</b>), respectively, within the black box.</p>
Full article ">Figure 14
<p>Contribution of different time windows and images to the classification. (<b>a</b>) Contribution of each month and of the entire transplanting and flooding period to the classification, evaluated through feature importance. The most important features are colored dark brown, while the least decisive features are shown in dark green. (<b>b</b>) Contribution of each image to the classification, calculated as the average score of its feature importance. The rankings of importance for all images (in descending order) are marked.</p>
Full article ">Figure 15
<p>The temporal profiles of paddy rice vegetation indices (NDVI, EVI, and LSWI) extracted from the fused time series data (<b>left column</b>) and MODIS (<b>right column</b>), respectively, at (<b>a</b>,<b>b</b>) a paddy rice site (132.27°E, 48.00°N), (<b>c</b>,<b>d</b>) a paddy rice site (133.03°E, 47.33°N), and (<b>e</b>,<b>f</b>) a paddy rice site (132.32°E, 47.31°N). The locations and dates of the field photos are marked in the photos. The VI values obtained from fused images are marked with black dots, and those obtained from MODIS images with hollow triangles.</p>
Full article ">
21 pages, 5210 KiB  
Article
Evaluation and Comparison of Random Forest and A-LSTM Networks for Large-scale Winter Wheat Identification
by Tianle He, Chuanjie Xie, Qingsheng Liu, Shiying Guan and Gaohuan Liu
Remote Sens. 2019, 11(14), 1665; https://doi.org/10.3390/rs11141665 - 12 Jul 2019
Cited by 49 | Viewed by 5411
Abstract
Machine learning comprises a group of powerful state-of-the-art techniques for land cover classification and cropland identification. In this paper, we proposed and evaluated two models based on random forest (RF) and attention-based long short-term memory (A-LSTM) networks that can learn directly from the [...] Read more.
Machine learning comprises a group of powerful state-of-the-art techniques for land cover classification and cropland identification. In this paper, we proposed and evaluated two models based on random forest (RF) and attention-based long short-term memory (A-LSTM) networks that can learn directly from the raw surface reflectance of remote sensing (RS) images for large-scale winter wheat identification in Huanghuaihai Region (North-Central China). We used a time series of Moderate Resolution Imaging Spectroradiometer (MODIS) images over one growing season and the corresponding winter wheat distribution map for the experiments. Each training sample was derived from the raw surface reflectance of MODIS time-series images. Both models achieved state-of-the-art performance in identifying winter wheat, and the F1 scores of RF and A-LSTM were 0.72 and 0.71, respectively. We also analyzed the impact of the pixel-mixing effect. Training with pure-mixed-pixel samples (the training set consists of pure and mixed cells and thus retains the original distribution of data) was more precise than training with only pure-pixel samples (the entire pixel area belongs to one class). We also analyzed the variable importance along the temporal series, and the data acquired in March or April contributed more than the data acquired at other times. Both models could predict winter wheat coverage in past years or in other regions with similar winter wheat growing seasons. The experiments in this paper showed the effectiveness and significance of our methods. Full article
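As an illustrative aside, the per-pixel time-series classification described above can be sketched with scikit-learn. The data below are synthetic stand-ins (the MODIS reflectance stack and wheat labels are not part of this listing); the 22 timesteps × 6 bands = 132 features and the hyperparameters n = 500 trees and k = 40 candidate features per split follow the tuning reported in the paper's figures, while the March–April green-up signal is a hypothetical construction for demonstration only:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_pix, n_steps, n_bands = 600, 22, 6     # synthetic sizes: 22*6 = 132 variables
X = rng.normal(0.2, 0.05, (n_pix, n_steps, n_bands))   # raw surface reflectance
y = rng.integers(0, 2, n_pix)            # 1 = winter wheat (synthetic labels)
# Hypothetical signal: wheat reflectance rises in the March-April composites
X[y == 1, 10:15, 1] += 0.15

# Flatten (timestep, band) into one feature vector per pixel, so the model
# learns directly from the raw reflectance time series
X_flat = X.reshape(n_pix, n_steps * n_bands)
clf = RandomForestClassifier(n_estimators=500, max_features=40, random_state=0)
clf.fit(X_flat[:500], y[:500])
accuracy = clf.score(X_flat[500:], y[500:])

# Accumulated importance per timestep (cf. the paper's variable-importance analysis)
step_importance = clf.feature_importances_.reshape(n_steps, n_bands).sum(axis=1)
```

With this construction the importance scores concentrate on the timesteps carrying the green-up signal, mirroring the paper's finding that March–April acquisitions contribute most.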
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Location of the study area.</p>
Full article ">Figure 2
<p>Two images of the same parcel acquired in March 2018 (<b>a</b>) and June 2018 (<b>b</b>) from the Planet Explorer API. In March, winter wheat enters the green-returning stage, while most other vegetation has not yet germinated. In June, winter wheat enters the maturation stage and turns yellow, while other vegetation is green. Therefore, it is easy to distinguish wheat areas from other land cover types via visual interpretation of images acquired in these two stages.</p>
Full article ">Figure 2 Cont.
<p>Two images of the same parcel acquired in March 2018 (<b>a</b>) and June 2018 (<b>b</b>) from the Planet Explorer API. In March, winter wheat enters the green-returning stage, while most other vegetation has not yet germinated. In June, winter wheat enters the maturation stage and turns yellow, while other vegetation is green. Therefore, it is easy to distinguish wheat areas from other land cover types via visual interpretation of images acquired in these two stages.</p>
Full article ">Figure 3
<p>Geographical partitioning strategy. The whole region was partitioned into eight regions from north to south.</p>
Full article ">Figure 4
<p>(<b>a</b>) The simple recurrent neural network (RNN) architecture, (<b>b</b>) the attention-based long short-term memory (A-LSTM) architecture, (<b>c</b>) the unrolled chain-type RNN structure.</p>
Full article ">Figure 4 Cont.
<p>(<b>a</b>) The simple recurrent neural network (RNN) architecture, (<b>b</b>) the attention-based long short-term memory (A-LSTM) architecture, (<b>c</b>) the unrolled chain-type RNN structure.</p>
Full article ">Figure 5
<p>Overview of the A-LSTM architecture. The attention mechanism looks at the complete encoded sequence to determine the encoded steps to weigh highly and generates a context vector. The decoder network uses these context vectors to make a prediction.</p>
Full article ">Figure 6
<p>Fine-tuning of the parameters n and k of the random forest (RF) model, (<b>a</b>) the overall accuracy of the RF trained on <span class="html-italic">pure-mixed pixel set</span>, (<b>b</b>) the overall accuracy of the RF trained on <span class="html-italic">Pure pixel set</span>. When n equals 500 and k equals 40, the performances of RF models converge; thus, we selected this combination for further experiments.</p>
Full article ">Figure 7
<p>Training logs for the A-LSTM model, (<b>a</b>) model trained on a mixed dataset, (<b>b</b>) model trained on a pure dataset.</p>
Full article ">Figure 8
<p>(<b>a</b>) The prediction map informed by the RF model, (<b>b</b>) the prediction map informed by the A-LSTM model, (<b>c</b>) ground truth winter wheat distribution map.</p>
Full article ">Figure 9
<p>(<b>a</b>) Importance scores of 132 variables with the hierarchy indexed by timestep and band, (<b>b</b>) Accumulated importance score per timestep, (<b>c</b>) Accumulated importance score per band.</p>
Full article ">Figure 10
<p>The mean probability distribution that the target class is aligned with the encoded sequence.</p>
Full article ">Figure 11
<p>The winter wheat growing season in the Huanghuaihai Region, (<b>a</b>) sowing; (<b>b</b>) overwintering; (<b>c</b>) green-returning; (<b>d</b>) jointing; (<b>e</b>) flowering; (<b>f</b>) maturation. The cell values of the raster map represent the day of the year on which winter wheat enters the growing stage. Contour lines are shown on the map. The first two maps show the growing times in 2017, and the others show the growing times in 2018.</p>
Full article ">Figure 12
<p>Historical winter wheat distribution map (2016–2017 growing season) informed by (<b>a</b>) RF and (<b>b</b>) A-LSTM, (<b>c</b>) the distribution of manually collected evaluation samples from the Planet Explorer API.</p>
Full article ">
22 pages, 2736 KiB  
Article
Relationship of Abrupt Vegetation Change to Climate Change and Ecological Engineering with Multi-Timescale Analysis in the Karst Region, Southwest China
by Xiaojuan Xu, Huiyu Liu, Zhenshan Lin, Fusheng Jiao and Haibo Gong
Remote Sens. 2019, 11(13), 1564; https://doi.org/10.3390/rs11131564 - 2 Jul 2019
Cited by 43 | Viewed by 3981
Abstract
Vegetation is known to be sensitive to both climate change and anthropogenic disturbance in the karst region. However, the relationship between an abrupt change in vegetation and its driving factors is unclear at multiple timescales. Based on the non-parametric Mann-Kendall test and the [...] Read more.
Vegetation is known to be sensitive to both climate change and anthropogenic disturbance in the karst region. However, the relationship between an abrupt change in vegetation and its driving factors is unclear at multiple timescales. Based on the non-parametric Mann-Kendall test and the ensemble empirical mode decomposition (EEMD) method, the abrupt changes in vegetation and its possible relationships with the driving factors in the karst region of southwest China during 1982–2015 are revealed at multiple timescales. The results showed that: (1) the Normalized Difference Vegetation Index (NDVI) showed an overall increasing trend and had an abrupt change in 2001. After the abrupt change, the greening trend of the NDVI in the east and the browning trend in the west, both changed from insignificant to significant. (2) After the abrupt change, at the 2.5-year time scale, the correlation between the NDVI and temperature changed from insignificantly negative to significantly negative in the west. Over the long-term trend, it changed from significantly negative to significantly positive in the east, but changed from significantly positive to significantly negative in the west. The abrupt change primarily occurred on the long-term trend. (3) After the abrupt change, 1143.32 km2 farmland was converted to forests in the east, and the forest area had significantly increased. (4) At the 2.5-year time scale, the abrupt change in the relationships between the NDVI and climate factors was primarily driven by climate change in the west, especially rising temperatures. Over the long-term trend, it was caused by ecological protection projects in the east, but by rising temperatures in the west. 
The integration of abrupt change analysis and multi-timescale analysis helps assess the relationships of vegetation changes with climate change and human activities accurately and comprehensively, deepens our understanding of the driving mechanisms of vegetation change, and provides scientific references for the protection of fragile ecosystems in the karst region. Full article
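The non-parametric Mann-Kendall test mentioned above has a compact closed form. The sketch below implements the basic trend statistic without tie correction (the sequential variant used for breakpoint detection builds on the same S statistic), so it is an illustration rather than the authors' exact procedure:

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Mann-Kendall trend test (no tie correction): returns the standardized
    Z statistic and a two-sided p-value."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S = number of concordant pairs minus discordant pairs (i < j)
    diffs = np.sign(x[None, :] - x[:, None])
    s = diffs[np.triu_indices(n, 1)].sum()
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - norm.cdf(abs(z)))
    return z, p

# Example: a monotonically greening NDVI-like series over 34 years (1982-2015)
z_stat, p_val = mann_kendall(np.linspace(0.3, 0.6, 34))
```

A strongly positive Z with a small p-value indicates a significant greening trend, consistent with the overall increasing NDVI reported above.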
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Location of the study area; and the spatial distribution of temperature, precipitation, vegetation types, elevation, and meteorological stations in southwestern China.</p>
Full article ">Figure 2
<p>A workflow of the research method process.</p>
Full article ">Figure 3
<p>The breakpoint and the spatial distribution of growing-season NDVI in the karst region of southwest China; (<b>a</b>) the abrupt change of NDVI, (<b>b</b>) the spatial distribution of mean NDVI from 1982 to 2015.</p>
Full article ">Figure 4
<p>The linear change rate and the significance level of the trend in the growing season NDVI. (<b>a</b>–<b>c</b>) show the linear change rate of the NDVI trend in the entire period, before the breakpoint and after the breakpoint. (<b>d</b>–<b>f</b>) show the significance level of the NDVI trend during the entire period, before the breakpoint and after the breakpoint. The insets show the frequency distribution of the corresponding values.</p>
Full article ">Figure 5
<p>Significant correlation between temperature and NDVI in 1982–2015 (<b>a</b>), before the breakpoint (<b>b</b>), after the breakpoint (<b>c</b>); correlation between precipitation and NDVI in 1982–2015 (<b>d</b>), before the breakpoint (<b>e</b>), after the breakpoint (<b>f</b>). The insets show the frequency distribution of corresponding values.</p>
Full article ">Figure 6
<p>EEMD decomposition of (<b>a</b>) the average temporal changes in growing-season NDVI, (<b>b</b>) temperature, and (<b>c</b>) precipitation during 1982–2015 in the karst region of Southwest China. The IMF1–IMF4 components and the residual represent different time scales and the long-term trend, respectively.</p>
Full article ">Figure 6 Cont.
<p>EEMD decomposition of (<b>a</b>) the average temporal changes in growing-season NDVI, (<b>b</b>) temperature, and (<b>c</b>) precipitation during 1982–2015 in the karst region of Southwest China. The IMF1–IMF4 components and the residual represent different time scales and the long-term trend, respectively.</p>
Full article ">Figure 7
<p>Significant correlation between climate changes and the NDVI at the 2.5-year time scale in the karst region of southwest China: (<b>a</b>) correlation between temperature and NDVI from 1982–2015, (<b>b</b>) before the breakpoint; (<b>c</b>) after the breakpoint; (<b>d</b>) significant correlation between precipitation and NDVI from 1982–2015, (<b>e</b>) before the breakpoint, (<b>f</b>) after the breakpoint; (r: correlation coefficient; p: significance level). The insets show the frequency distribution of the corresponding values.</p>
Full article ">Figure 8
<p>Significant correlation between climate changes and the NDVI over the long-term trend in the karst region of southwest China: (<b>a</b>) correlation between temperature and NDVI from 1982–2015, (<b>b</b>) before the breakpoint, (<b>c</b>) after the breakpoint; (<b>d</b>) significant correlation between precipitation and NDVI from 1982–2015, (<b>e</b>) before the breakpoint, (<b>f</b>) after the breakpoint; (r: correlation coefficient; p: significance level). The insets show the frequency distribution of the corresponding values.</p>
Full article ">
16 pages, 49412 KiB  
Article
Comparison of Vegetation Indices Derived from UAV Data for Differentiation of Tillage Effects in Agriculture
by Junho Yeom, Jinha Jung, Anjin Chang, Akash Ashapure, Murilo Maeda, Andrea Maeda and Juan Landivar
Remote Sens. 2019, 11(13), 1548; https://doi.org/10.3390/rs11131548 - 29 Jun 2019
Cited by 79 | Viewed by 13408
Abstract
Unmanned aerial vehicle (UAV) platforms with sensors covering the red-edge and near-infrared (NIR) bands to measure vegetation indices (VIs) have been recently introduced in agriculture research. Consequently, VIs originally developed for traditional airborne and spaceborne sensors have become applicable to UAV systems. In [...] Read more.
Unmanned aerial vehicle (UAV) platforms with sensors covering the red-edge and near-infrared (NIR) bands to measure vegetation indices (VIs) have recently been introduced in agriculture research. Consequently, VIs originally developed for traditional airborne and spaceborne sensors have become applicable to UAV systems. In this study, we investigated the difference between tillage treatments for cotton and sorghum using various RGB and NIR VIs. Minimized tillage has been known to increase farm sustainability and potentially optimize productivity over time; however, repeated tillage is the most commonly adopted management practice in agriculture. To this day, quantitative comparisons of plant growth patterns between conventional tillage (CT) and no tillage (NT) fields are often inconsistent. In this study, high-resolution and multi-temporal UAV data were used to analyze tillage effects on plant health, and the performance of various vegetation indices was investigated. Time series data over ten dates were acquired on a weekly basis by RGB and multispectral (MS) UAV platforms: a DJI Phantom 4 Pro and a DJI Matrice 100 with the SlantRange 3p sensor. Ground reflectance panels and an ambient illumination sensor were used for the radiometric calibration of the RGB and MS orthomosaic images, respectively. Various RGB- and NIR-based vegetation indices were then calculated for the comparison between CT and NT treatments. In addition, a one-tailed Z-test was conducted to check the significance of the VI differences between CT and NT treatments. The results showed distinct differences in VIs between tillage treatments during the whole growing season. NIR-based VIs showed better discrimination performance than RGB-based VIs. Out of 13 VIs, the modified soil-adjusted vegetation index (MSAVI) and optimized soil-adjusted vegetation index (OSAVI) performed best in terms of quantitative difference measurements and the Z-test between tillage treatments.
The modified green red vegetation index (MGRVI) and excess green (ExG) index showed reliable separability and can serve as an economical alternative for RGB-only UAV applications. Full article
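For reference, the soil-adjusted and RGB indices compared above have standard closed-form definitions. A minimal sketch using the conventional formulations, with reflectances scaled to [0, 1] (band names are generic, not tied to the SlantRange sensor):

```python
import numpy as np

def msavi(nir, red):
    """Modified soil-adjusted vegetation index (self-adjusting L term)."""
    return (2 * nir + 1 - np.sqrt((2 * nir + 1) ** 2 - 8 * (nir - red))) / 2

def osavi(nir, red):
    """Optimized soil-adjusted vegetation index (fixed soil factor 0.16)."""
    return (nir - red) / (nir + red + 0.16)

def mgrvi(green, red):
    """Modified green red vegetation index (RGB-only)."""
    return (green ** 2 - red ** 2) / (green ** 2 + red ** 2)

def exg(red, green, blue):
    """Excess green on band-sum-normalized chromatic coordinates."""
    total = red + green + blue
    r, g, b = red / total, green / total, blue / total
    return 2 * g - r - b
```

All four accept scalars or NumPy arrays, so they can be applied directly to calibrated orthomosaic bands.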
Show Figures

Graphical abstract
Full article ">Figure 1
<p>RGB orthomosaic image of the study area on 7 June 2017, with WGS 84 UTM 14N map coordinates (red polygons: cotton, yellow polygons: sorghum). CT, conventional tillage; NT, no tillage.</p>
Full article ">Figure 2
<p>Reflectance panels (black, dark gray, light gray, and white) and a ground GCP target.</p>
Full article ">Figure 3
<p>Radiometric calibration model for the RGB image in the blue band acquired on 20 May.</p>
Full article ">Figure 4
<p>Radiometrically-calibrated RGB orthomosaic image acquired on 20 May.</p>
Full article ">Figure 5
<p>Radiometrically-calibrated MS orthomosaic image acquired on 20 May (NIR-red-green color composition).</p>
Full article ">Figure 6
<p>Time series MGRVI images.</p>
Full article ">Figure 7
<p>Time series MSAVI images.</p>
Full article ">Figure 7 Cont.
<p>Time series MSAVI images.</p>
Full article ">Figure 8
<p>Enlarged (<b>a</b>) MGRVI and (<b>b</b>) MSAVI images of the sorghum field on 19 June 2017.</p>
Full article ">Figure 9
<p>Time series RGB VIs’ plots.</p>
Full article ">Figure 10
<p>Time series NIR VIs’ plots.</p>
Full article ">
18 pages, 5001 KiB  
Article
Evaluation of Vegetation Biophysical Variables Time Series Derived from Synthetic Sentinel-2 Images
by Najib Djamai, Detang Zhong, Richard Fernandes and Fuqun Zhou
Remote Sens. 2019, 11(13), 1547; https://doi.org/10.3390/rs11131547 - 29 Jun 2019
Cited by 17 | Viewed by 4518
Abstract
Time series of vegetation biophysical variables (leaf area index (LAI), fraction canopy cover (FCOVER), fraction of absorbed photosynthetically active radiation (FAPAR), canopy chlorophyll content (CCC), and canopy water content (CWC)) were estimated from interpolated Sentinel-2 (S2-LIKE) surface reflectance images, for an agricultural region [...] Read more.
Time series of vegetation biophysical variables (leaf area index (LAI), fraction canopy cover (FCOVER), fraction of absorbed photosynthetically active radiation (FAPAR), canopy chlorophyll content (CCC), and canopy water content (CWC)) were estimated from interpolated Sentinel-2 (S2-LIKE) surface reflectance images, for an agricultural region located in central Canada, using the Simplified Level 2 Product Prototype Processor (SL2P). S2-LIKE surface reflectance data were generated by blending clear-sky Sentinel-2 Multispectral Imager (S2-MSI) images with daily BRDF-adjusted Moderate Resolution Imaging Spectroradiometer images using the Prediction Smooth Reflectance Fusion Model (PSFRM), and validated using thirteen independent S2-MSI images (RMSE 6%). The uncertainty of S2-LIKE surface reflectance data increases with the time delay between the prediction date and the closest S2-MSI image used for training PSFRM. Vegetation biophysical variables from S2-LIKE products were validated qualitatively and quantitatively by comparison to the corresponding vegetation biophysical variables from S2-MSI products (RMSE = 0.55 for LAI, ~10% for FCOVER and FAPAR, 0.13 g/m2 for CCC, and 0.16 kg/m2 for CWC). Uncertainties of vegetation biophysical variables derived from S2-LIKE products are almost linearly related to the uncertainty of the input reflectance data. When compared to the in situ measurements collected during the Soil Moisture Active Passive Validation Experiment 2016 field campaign, uncertainties of LAI (0.83) and FCOVER (13.73%) estimates from S2-LIKE products were slightly larger than uncertainties of LAI (0.57) and FCOVER (11.80%) estimates from S2-MSI products. However, equal uncertainties (0.32 kg/m2) were obtained for CWC estimates using SL2P with either S2-LIKE or S2-MSI input data. Full article
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Study site and sampled fields (Basemap: Land Cover of Canada [<a href="#B29-remotesensing-11-01547" class="html-bibr">29</a>]).</p>
Full article ">Figure 2
<p>Surface reflectance data from S2-LIKE product compared to the corresponding surface reflectance data from S2-MSI product (# ~11 × 10<sup>7</sup> samples).</p>
Full article ">Figure 3
<p>RMSE between surface reflectance datasets from S2-LIKE and S2-MSI as function of the temporal interval to clear-sky S2-MSI image used for training (<math display="inline"><semantics> <mrow> <mi>M</mi> <mi>i</mi> <mi>n</mi> <mo>_</mo> <mo>Δ</mo> <mi>T</mi> </mrow> </semantics></math>): points (error bars) present the mean (standard deviation) of RMSE values obtained for spectral band predictions with equal <math display="inline"><semantics> <mrow> <mi>M</mi> <mi>i</mi> <mi>n</mi> <mo>_</mo> <mo>Δ</mo> <mi>T</mi> </mrow> </semantics></math>.</p>
Full article ">Figure 4
<p>Example of LAI maps derived from S2-MSI (<b>a</b>) and S2-LIKE (<b>b</b>) images on DOY 175, as well as the corresponding LAI prediction error map (<b>c</b>) and histogram (<b>d</b>).</p>
Full article ">Figure 5
<p>Vegetation biophysical variables derived from S2-LIKE product compared to the corresponding vegetation biophysical variables derived from S2-MSI (# ~11 × 10<sup>7</sup> samples).</p>
Full article ">Figure 6
<p>RMSE between pairs of vegetation biophysical variables derived from S2-LIKE and S2-MSI against the average of RMSE values obtained from the different spectral bands for the corresponding surface reflectance datasets.</p>
Full article ">Figure 7
<p>Vegetation biophysical variables derived using SL2P from S2-LIKE (<b>top</b>) and S2-MSI (<b>bottom</b>) images compared to in situ data.</p>
Full article ">Figure 8
<p>Time series of (<b>a</b>) surface reflectance (SR) data (RED and NIR bands), (<b>b</b>) LAI, (<b>c</b>) FAPAR, (<b>d</b>) FCOVER, (<b>e</b>) CWC, and (<b>f</b>) CCC from S2-LIKE images, S2-MSI images, and in situ measurements (for LAI, FCOVER, and CWC).</p>
Full article ">
25 pages, 4707 KiB  
Article
UAV and Ground Image-Based Phenotyping: A Proof of Concept with Durum Wheat
by Adrian Gracia-Romero, Shawn C. Kefauver, Jose A. Fernandez-Gallego, Omar Vergara-Díaz, María Teresa Nieto-Taladriz and José L. Araus
Remote Sens. 2019, 11(10), 1244; https://doi.org/10.3390/rs11101244 - 25 May 2019
Cited by 78 | Viewed by 8596
Abstract
Climate change is one of the primary culprits behind the restraint in the increase of cereal crop yields. In order to address its effects, effort has been focused on understanding the interaction between genotypic performance and the environment. Recent advances in unmanned aerial [...] Read more.
Climate change is one of the primary culprits behind the restraint in the increase of cereal crop yields. In order to address its effects, effort has been focused on understanding the interaction between genotypic performance and the environment. Recent advances in unmanned aerial vehicles (UAV) have enabled the assembly of imaging sensors into precision aerial phenotyping platforms, so that a large number of plots can be screened effectively and rapidly. However, ground evaluations may still be an alternative in terms of cost and resolution. We compared the performance of red–green–blue (RGB), multispectral, and thermal data of individual plots captured from the ground and taken from a UAV, to assess genotypic differences in yield. Our results showed that crop vigor, together with the quantity and duration of green biomass that contributed to grain filling, were critical phenotypic traits for the selection of germplasm that is better adapted to present and future Mediterranean conditions. In this sense, the use of RGB images is presented as a powerful and low-cost approach for assessing crop performance. For example, broad-sense heritability for some RGB indices was clearly higher than that of grain yield under supplementary irrigation (four times), rainfed (by 50%), and late-planting (by 10%) conditions. Moreover, there was no significant effect of platform proximity (distance between the sensor and crop canopy) on the vegetation indices, and both ground and aerial measurements performed similarly in assessing yield. Full article
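The green area (GA) RGB index referred to in Figure 4 is conventionally computed as the fraction of canopy-image pixels whose hue falls within a green range; the 60–180° window below is an assumed threshold for illustration, not necessarily the authors' exact setting:

```python
import numpy as np

def hue_degrees(rgb):
    """Hue (0-360 deg) of an RGB array with channels in [0, 1], last axis = RGB."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mx = rgb.max(axis=-1)
    mn = rgb.min(axis=-1)
    d = mx - mn
    h = np.zeros_like(mx)
    nz = d > 0                               # achromatic pixels keep hue 0
    r_max = nz & (mx == r)
    g_max = nz & (mx == g) & ~r_max
    b_max = nz & ~r_max & ~g_max
    h[r_max] = ((g - b)[r_max] / d[r_max]) % 6.0
    h[g_max] = (b - r)[g_max] / d[g_max] + 2.0
    h[b_max] = (r - g)[b_max] / d[b_max] + 4.0
    return h * 60.0

def green_area(rgb, lo=60.0, hi=180.0):
    """Fraction of pixels whose hue lies in the (assumed) green band."""
    h = hue_degrees(rgb)
    return float(np.mean((h >= lo) & (h <= hi)))
```

Applied to a plot-level orthomosaic crop, this returns the proportion of green canopy, which is the quantity correlated with grain yield in Figure 4.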
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Cumulative monthly rainfall (blue line) and maximum, minimum, and mean temperature (bars) in Colmenar de Oreja for the 2016–2017 crop cycle.</p>
Full article ">Figure 2
<p>Red–green–blue (RGB) (<b>A</b>), false-color normalized difference vegetation index (NDVI) (<b>B</b>), and false-color thermal (<b>C</b>) orthomosaic examples corresponding to the late-planting trial during the heading stage at the third sampling visit. Both the multispectral and thermal mosaics have been given false colors: in the former, low NDVI values are colored red and high values green; in the latter, warmer temperature values are colored red and colder values blue.</p>
Full article ">Figure 3
<p>Pearson correlation coefficient heatmap of grain yield with parameters measured from ground and aerial platforms throughout the different phenological stages and treatments. Correlations are scaled according to the key above. Correlations were studied across the 72 plots from each growing condition.</p>
Full article ">Figure 4
<p>Relationships between grain yield with the RGB index green area (GA) (left), the multispectral index NDVI (middle) and the canopy temperature (right), measured from the ground level (red points) and from the aerial level (blue points) during grain filling for the supplementary irrigation (top), the rainfed (middle), and the late-planting (bottom) growing conditions. Correlations were studied across the 72 plots from each growing condition.</p>
Full article ">Figure 5
<p>Bar graph comparison of grain yield with the H<sup>2</sup> (orange) and H<sup>2</sup> × r<sub>g</sub> (blue) values for a selection of indices.</p>
Full article ">
22 pages, 6698 KiB  
Article
Winter Wheat Canopy Height Extraction from UAV-Based Point Cloud Data with a Moving Cuboid Filter
by Yang Song and Jinfei Wang
Remote Sens. 2019, 11(10), 1239; https://doi.org/10.3390/rs11101239 - 24 May 2019
Cited by 42 | Viewed by 6214
Abstract
Plant height can be used as an indicator to estimate crop phenology and biomass. The Unmanned Aerial Vehicle (UAV)-based point cloud data derived from photogrammetry methods contains the structural information of crops which could be used to retrieve crop height. However, removing noise [...] Read more.
Plant height can be used as an indicator to estimate crop phenology and biomass. Unmanned Aerial Vehicle (UAV)-based point cloud data derived from photogrammetry methods contain the structural information of crops, which can be used to retrieve crop height. However, removing noise and outliers from UAV-based crop point cloud data for height extraction is challenging. The objective of this paper is to develop an alternative method for canopy height determination from UAV-based 3D point cloud datasets using a statistical analysis method and a moving cuboid filter to remove outliers. In this method, the point cloud data are first divided into many 3D columns. Secondly, a moving cuboid filter is applied in each column and moved downward to eliminate noise points. The threshold of point numbers in the filter is calculated based on the distribution of points in the column. After applying the moving cuboid filter, the crop height is calculated from the highest and lowest points in each 3D column. The proposed method achieved high accuracy for height extraction, with a low Root Mean Square Error (RMSE) of 6.37 cm and Mean Absolute Error (MAE) of 5.07 cm. The canopy height monitoring window for winter wheat using this method starts at the beginning of the stem extension stage and ends at the end of the heading stage (BBCH 31 to 65). Since the height of wheat changes little after the heading stage, this method could be used to retrieve the crop height of winter wheat. In addition, this method requires only one UAV operation in the field. It could be an effective method that can be widely used to help end-users monitor their crops and support real-time decision-making for farm management. Full article
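The column-wise filtering idea can be illustrated in a few lines. The sketch below simplifies the paper's moving cuboid to a density filter over fixed vertical slices of a single column's elevations; the slice thickness and minimum point count are assumed parameters, not the adaptive, distribution-based thresholds described in the paper:

```python
import numpy as np

def column_canopy_height(z, slice_h=0.1, min_pts=3):
    """Estimate canopy height from elevations z (m) of points in one 3D
    column: bin points into vertical slices, drop sparse slices as noise,
    and return top-minus-bottom of the surviving slices."""
    z = np.asarray(z, dtype=float)
    edges = np.arange(z.min(), z.max() + 2 * slice_h, slice_h)
    counts, _ = np.histogram(z, bins=edges)
    dense = counts >= min_pts
    if not dense.any():
        return 0.0
    top = edges[1:][dense].max()      # upper edge of highest dense slice
    bottom = edges[:-1][dense].min()  # lower edge of lowest dense slice
    return top - bottom
```

With a dense ground cluster, a dense canopy cluster, and a lone high outlier, the filter rejects the outlier so the returned height reflects the canopy rather than the noise point.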
Show Figures

Graphical abstract
Full article ">Figure 1
<p>Study area and sampling points in the field. (<b>a</b>) The study area in Ontario, Canada. (<b>b</b>) The sampling points in the study area. The blue points are the ground control points, and the black squares are ground-measured sampling points.</p>
Full article ">Figure 2
<p>2D UAV orthomosaic images of the study area during three growth stages: (<b>a</b>) 16 May, (<b>c</b>) 31 May, (<b>e</b>) 9 June; 3D point cloud dataset for the black boundary area in perspective view: (<b>b</b>) 16 May, (<b>d</b>) 31 May, (<b>f</b>) 9 June. The color bar shows the elevation (above sea level) of the point cloud dataset.</p>
Full article ">Figure 2 Cont.
<p>2D UAV orthomosaic images of the study area during three growth stages: (<b>a</b>) 16 May, (<b>c</b>) 31 May, (<b>e</b>) 9 June; 3D point cloud dataset for the black boundary area in perspective view: (<b>b</b>) 16 May, (<b>d</b>) 31 May, (<b>f</b>) 9 June. The color bar shows the elevation (above sea level) of the point cloud dataset.</p>
Full article ">Figure 3
<p>Individual 3D square cross-section column within the point cloud data set.</p>
Full article ">Figure 4
<p>Histograms of the point distribution of a typical 3D column in the crop field at different crop growth stages. The distribution of overall points, bare ground points, and plant points are represented by black, brown, and green bars. X-axis is the elevation of points and Y-axis is the frequency of points. (<b>a</b>) The histogram of points distribution for bare ground points in October 2015. (<b>b</b>,<b>c</b>) The histogram of points distribution in the early growth stage of winter wheat (BBCH ≈ 31) on 16 May 2016. (<b>d</b>,<b>e</b>) The histogram of points distribution in the middle growth stage of winter wheat (BBCH ≈ 65) on 31 May 2016. (<b>f</b>,<b>g</b>) The histogram of points distribution in the late growth stage of winter wheat (BBCH ≈ 83) on 9 June 2016.</p>
Full article ">Figure 5
<p>The principle of the moving cuboid filter in a single column. The orange cuboid is the moving cuboid filter. It starts from Step 1 and moves down one slice in Step 2. <math display="inline"><semantics> <mi>i</mi> </semantics></math> is the number of steps in the 3D column, and Step <math display="inline"><semantics> <mrow> <mi>j</mi> </mrow> </semantics></math> is the final step.</p>
Full article ">Figure 6
<p>Flow chart of the moving cuboid filter.</p>
Full article ">Figure 7
<p>Threshold <math display="inline"><semantics> <mrow> <msub> <mi>T</mi> <mi>α</mi> </msub> </mrow> </semantics></math> determination using the relationship between the ratio (<math display="inline"><semantics> <mi>α</mi> </semantics></math>) and optimal mean threshold (<math display="inline"><semantics> <mi>T</mi> </semantics></math>). (<b>a</b>) the relationship between <math display="inline"><semantics> <mi>α</mi> </semantics></math> and <math display="inline"><semantics> <mi>T</mi> </semantics></math>; (<b>b</b>) classification of <math display="inline"><semantics> <mi>α</mi> </semantics></math>.</p>
Full article ">Figure 8
<p>Raw maps of the winter wheat canopy height displayed with cubic convolution interpolation. (<b>a</b>) 16 May; (<b>b</b>) 31 May; (<b>c</b>) 9 June.</p>
Full article ">Figure 9
<p>Map of the unsolved pixels (red points) at different growing stages for winter wheat. (<b>a</b>) 16 May; (<b>b</b>) 31 May; (<b>c</b>) 9 June.</p>
Full article ">Figure 10
<p>The final maps of canopy height in the study area at different growing stages. (<b>a</b>) 16 May; (<b>b</b>) 31 May; (<b>c</b>) 9 June. After removal of the unsolved pixels, the final map was generated using the inverse distance weighted (IDW) interpolation method and displayed with cubic convolution resampling. The black dashed rectangle marks an area with higher estimated heights on the crop height map.</p>
Full article ">Figure 11
<p>The winter wheat canopy height produced by Khanna’s method. (<b>a</b>) Canopy height map on 16 May. (<b>b</b>) Canopy height map on 31 May. (<b>c</b>) Canopy height map with unsolved pixels on 16 May. (<b>d</b>) Canopy height map with unsolved pixels on 31 May.</p>
Full article ">Figure 12
<p>The relationship between the threshold and estimated crop canopy height for one sampling point.</p>
Full article ">Figure 13
<p>The results after applying the proposed moving cuboid filter with different thresholds; the red points represent outliers and the green points are the points that are kept after filtering. (<b>a</b>,<b>b</b>) threshold of 7.4%; (<b>c</b>,<b>d</b>) threshold of 7.5%.</p>
Full article ">
24 pages, 4031 KiB  
Article
Evaluating the Potential of Multi-Seasonal CBERS-04 Imagery for Mapping the Quasi-Circular Vegetation Patches in the Yellow River Delta Using Random Forest
by Qingsheng Liu, Hongwei Song, Gaohuan Liu, Chong Huang and He Li
Remote Sens. 2019, 11(10), 1216; https://doi.org/10.3390/rs11101216 - 22 May 2019
Cited by 18 | Viewed by 3541
Abstract
High-resolution satellite imagery enables decametric-scale quasi-circular vegetation patch (QVP) mapping, which greatly aids the monitoring of vegetation restoration projects and the development of theories in pattern evolution and maintenance research. This study analyzed the potential of employing five seasonal fused 5 m spatial resolution CBERS-04 satellite images to map QVPs in the Yellow River Delta, China, using the Random Forest (RF) classifier. The classification accuracies corresponding to individual and multi-season combined images were compared to understand the seasonal effect and the importance of optimal image timing and acquisition frequency for QVP mapping. For classification based on single season imagery, the early spring March imagery, with an overall accuracy (OA) of 98.1%, was proven to be more adequate than the other four individual seasonal images. The early spring (March) and winter (December) combined dataset produced the most accurate QVP detection results, with a precision rate of 66.3%, a recall rate of 43.9%, and an F measure of 0.528. For larger study areas, the gain in accuracy should be balanced against the increase in processing time and space when including the derived spectral indices in the RF classification model. Future research should focus on applying higher resolution imagery to QVP mapping. Full article
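The precision rate, recall rate, and F measure reported above can be computed from a pixel-wise confusion between the detected and reference QVP masks. A minimal NumPy sketch; the toy masks below are synthetic stand-ins, not the paper's data:

```python
# Toy illustration of the QVP detection scores: precision, recall, and F
# measure derived from true positives, false positives, and false negatives.
import numpy as np

reference = np.array([1, 1, 1, 0, 0, 1, 0, 0, 1, 0])  # ground-truth QVP pixels
detected = np.array([1, 0, 1, 0, 1, 1, 0, 0, 0, 0])   # classifier output

tp = np.sum((detected == 1) & (reference == 1))  # correctly detected QVP pixels
fp = np.sum((detected == 1) & (reference == 0))  # false alarms
fn = np.sum((detected == 0) & (reference == 1))  # missed QVP pixels

precision = tp / (tp + fp)   # fraction of detections that are real
recall = tp / (tp + fn)      # fraction of real QVP pixels that are found
f_measure = 2 * precision * recall / (precision + recall)
print(precision, recall, round(f_measure, 3))
```

The F measure is the harmonic mean of precision and recall, so it penalizes a classifier that trades one heavily against the other.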
Figure 1
<p>(<b>a</b>) Map showing the location of the study area in Dongying City, Shandong Province, China (photograph taken on 30 May 2018); (<b>b</b>) Subset of the CBERS-04 fusion imagery (18 May 2017) acquired over the study area shown as a false color RGB composite consisting of R-Near infrared (band 4), G-Red (band 3), and B-Green (band 2).</p>
Full article ">Figure 2
<p>Predictive variable importance for the five individual season data.</p>
Full article ">Figure 3
<p>Overall classification accuracies and Kappa coefficients for land cover classification from five single seasonal images using the five predictive variable selection datasets listed in <a href="#remotesensing-11-01216-t004" class="html-table">Table 4</a>.</p>
Full article ">Figure 4
<p>Precision rate, recall rate, and F measure values for the QVP detection from five single seasonal images using the five predictive variable selection datasets listed in <a href="#remotesensing-11-01216-t004" class="html-table">Table 4</a>.</p>
Full article ">Figure 5
<p>Predictive variable importance for the ten two-season combined datasets. (<b>a</b>) March + May; (<b>b</b>) March + July; (<b>c</b>) March + October; (<b>d</b>) March + December; (<b>e</b>) May + July; (<b>f</b>) May + October; (<b>g</b>) May + December; (<b>h</b>) July + October; (<b>j</b>) July + December; (<b>k</b>) October + December.</p>
Figure 5 Cont.">
Full article ">Figure 6
<p>Overall classification accuracies and Kappa coefficient values for land cover classification from the ten combined two-season images using the ten predictive variable selection datasets listed in <a href="#remotesensing-11-01216-t005" class="html-table">Table 5</a>.</p>
Full article ">Figure 7
<p>Precision rate, recall rate, and F measure values for the detection of QVPs from the ten combined two-season images using ten predictive variable selection datasets listed in <a href="#remotesensing-11-01216-t006" class="html-table">Table 6</a>.</p>
Full article ">Figure 8
<p>Overall classification accuracies and Kappa coefficients for land cover classification from the ten combined three-season datasets.</p>
Full article ">Figure 9
<p>Precision rate, recall rate, and F measure values for the QVP detection from the ten combined three-season datasets.</p>
Full article ">Figure 10
<p>Overall classification accuracies and Kappa coefficients for land cover classification from five combined four-season images.</p>
Full article ">Figure 11
<p>Precision rate, recall rate, and F measure values for the QVP detection from the five combined four-season datasets.</p>
Full article ">Figure 12
<p>(<b>a</b>) Classification result of the QVPs, bare soil, and water area using the random forest classifier with the March–December combined images; (<b>b</b>) The final detected QVPs of <a href="#remotesensing-11-01216-f010" class="html-fig">Figure 10</a>a using the thresholds of area (less than 3000 m<sup>2</sup>) and perimeter/area (less than 0.54).</p>
Full article ">Figure 13
<p>Part of the CBERS-04 fusion false color images (RGB432). (<b>a</b>) 27 March 2017; (<b>b</b>) 18 May 2017; (<b>c</b>) 10 July 2016; (<b>d</b>) 24 October 2015; (<b>e</b>) 13 December 2016.</p>
Full article ">Figure 14
<p>Overall classification accuracies and Kappa coefficients for land cover classification from the thirty-one pairs with and without nine spectral indices. (1–5 stand for the single seasons March (1), May (2), July (3), October (4), December (5); 6–15 identify the two-season combinations March + May (6), March + July (7), March + October (8), March + December (9), May + July (10), May + October (11), May + December (12), July + October (13), July + December (14), October + December (15); 16–25 identify the three-season combinations March + May + July (16), March + May + October (17), March + May + December (18), March + July + October (19), March + July + December (20), March + October + December (21), May + July + October (22), May + July + December (23), May + October + December (24), July + October + December (25); 26–30 identify the four-season combinations March + May + July + October (26), March + May + July + December (27), March + May + October + December (28), March + July + October + December (29), May + July + October + December (30); and 31 stands for the five-season combination March + May + July + October + December (31).)</p>
Full article ">
22 pages, 4503 KiB  
Article
Quantitative Assessment of the Impact of Physical and Anthropogenic Factors on Vegetation Spatial-Temporal Variation in Northern Tibet
by Qinwei Ran, Yanbin Hao, Anquan Xia, Wenjun Liu, Ronghai Hu, Xiaoyong Cui, Kai Xue, Xiaoning Song, Cong Xu, Boyang Ding and Yanfen Wang
Remote Sens. 2019, 11(10), 1183; https://doi.org/10.3390/rs11101183 - 18 May 2019
Cited by 55 | Viewed by 4246
Abstract
The alpine grassland on the Qinghai-Tibet Plateau accounts for about one third of China’s total grassland area and plays a crucial role in regulating grassland ecological functions. Both environmental changes and irrational use of the grassland can result in severe grassland degradation in some areas of the Qinghai-Tibet Plateau. However, the magnitude and patterns of the physical and anthropogenic factors driving grassland variation over northern Tibet remain debatable, and the interactive influences among those factors are still unclear. In this study, we employed a geographical detector model to quantify the primary and interactive impacts of both the physical factors (precipitation, temperature, sunshine duration, soil type, elevation, slope, and aspect) and the anthropogenic factors (population density, road density, residential density, grazing density, per capita GDP, and land use type) on vegetation variation from 2000 to 2015 in northern Tibet. Our results show that the vegetation index in northern Tibet significantly decreased from 2000 to 2015. Overall, the stability of vegetation types was ranked as follows: the alpine scrub > the alpine steppe > the alpine meadow. The physical factors, rather than the anthropogenic factors, have been the primary drivers of vegetation dynamics in northern Tibet. Specifically, meteorological factors best explained the alpine meadow and alpine steppe variation. Precipitation was the key factor that influenced the alpine meadow variation, whereas temperature was the key factor that contributed to the alpine steppe variation. The anthropogenic factors, such as population density, grazing density and per capita GDP, influenced the alpine scrub variation most. The influence of population density is highly similar to that of grazing density, which may offer a convenient way to simplify the study of anthropogenic activities on the Tibetan Plateau.
The interactions between the driving factors had larger effects on vegetation than any single factor. In the alpine meadow, the interaction between precipitation and temperature explained 44.6% of the vegetation variation. In the alpine scrub, the interaction between temperature and GDP was the highest, accounting for 27.5% of the vegetation variation. For the alpine steppe, the interaction between soil type and population density explained 29.4% of the vegetation variation. The highest level of vegetation degradation occurred in the range of 448–469 mm rainfall in the alpine meadow, 0.61–1.23 people/km<sup>2</sup> in the alpine scrub, and −0.83 to 0.15 °C in the alpine steppe, respectively. These findings could contribute to a better understanding of degradation prevention and sustainable development of the alpine grassland ecosystem in northern Tibet. Full article
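The geographical detector quantifies a factor's explanatory power with the q statistic, q = 1 − Σ_h(N_h σ_h²) / (N σ²), where h indexes the strata of the factor, N_h and σ_h² are the count and variance within stratum h, and N and σ² are the global count and variance. A hedged sketch on synthetic data (the function name and toy values are illustrative, not from the study):

```python
# Sketch of the geographical detector's q statistic:
#   q = 1 - sum_h(N_h * var_h) / (N * var)
# q approaches 1 when the factor's strata explain most of the variation.
import numpy as np

def geodetector_q(values, strata):
    """Determinant power q of `strata` on `values` (0 = none, 1 = full)."""
    values = np.asarray(values, dtype=float)
    strata = np.asarray(strata)
    within = sum(
        (strata == h).sum() * values[strata == h].var()
        for h in np.unique(strata)
    )
    return 1.0 - within / (values.size * values.var())

# Toy example: NDVI slopes cleanly separated by a factor class -> q near 1.
ndvi_slope = np.array([0.10, 0.11, 0.09, 0.50, 0.52, 0.48])
factor = np.array([0, 0, 0, 1, 1, 1])
print(round(geodetector_q(ndvi_slope, factor), 3))
```

Interaction effects, as in the abstract, are assessed by computing q for the overlay of two factors' strata and comparing it with the single-factor q values.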
Graphical abstract
Full article ">Figure 1
<p>Location of the study area. Vegetation type data of northern Tibet were obtained from the Vegetation Map of China (1:1,000,000), provided by the Data Center for Resources and Environmental Sciences, Chinese Academy of Sciences (RESDC) (<a href="http://www.resdc.cn" target="_blank">http://www.resdc.cn</a>).</p>
Full article ">Figure 2
<p>The spatial distribution of physical factors (temperature, precipitation, sunshine duration, elevation, aspect, slope, and soil type) and anthropogenic factors (population density, land use, per capita GDP, grazing density, residential density, and road density) in northern Tibet. Gradient colors reflect the discretization results of continuous variables, such as temperature, precipitation, sunshine duration, elevation, slope, population density, per capita GDP, grazing density, residential density, and road density. The categorization method of each variable is appended to the sub-graph title. EI, NB, QU, GI and SD stand for the discretization methods of Equal Interval, Natural Break, Quantile, Geometrical Interval and Standard Deviation, respectively. The numbers 2–8 indicate the number of discrete intervals used by each method.</p>
Full article ">Figure 3
<p>The spatial distribution of average NDVI in growing season in northern Tibet.</p>
Full article ">Figure 4
<p>Spatial distribution of trends and corresponding significant levels of: (<b>a</b>) and (<b>e</b>) NDVI; (<b>b</b>) and (<b>f</b>) precipitation; (<b>c</b>) and (<b>g</b>) temperature; (<b>d</b>) and (<b>h</b>) sunshine duration, from 2000–2015 across northern Tibet.</p>
Full article ">Figure 5
<p>The Mann-Kendall test to detect the annual trends of environmental factors during 2000–2015. The trends of (<b>a</b>) NDVI, (<b>b</b>) Precipitation, (<b>c</b>) Temperature and (<b>d</b>) Sunshine duration are shown at four scales, that is, the whole region, the alpine meadow, the alpine scrub and the alpine steppe. The trends of (<b>e</b>) Population density and (<b>f</b>) Grazing density are exhibited only at the scale of the whole region; trends at the county scale are shown in <a href="#app1-remotesensing-11-01183" class="html-app">Appendix A</a> <a href="#remotesensing-11-01183-f0A1" class="html-fig">Figure A1</a>.</p>
Full article ">Figure 6
<p><span class="html-italic">q</span> values for seven physical factors and six anthropogenic factors guiding the distribution of NDVI variation. It denotes the determination power of driving factors on vegetation variation. Seven physical factors, As, ST, SD, SI, E, T and P represent aspect, soil type, sunshine duration, slope, elevation, temperature, and precipitation, respectively. Six anthropogenic factors, GDP, PD, GI, ReD, LU and RoD represent per capita GDP, population density, grazing density, residential density, land use, and road density, respectively.</p>
Full article ">Figure 7
<p>The relationship between the mean NDVI slope and the main driving factors. The horizontal axis of each subgraph denotes the subrange of a given variable, while the vertical axis represents the NDVI slope. The best-fit curve is selected by comparing the results of various linear and nonlinear models. Each column displays the statistics for a specific vegetation type, namely, the alpine meadow, the alpine scrub and the alpine steppe. Rows are ordered by factor importance, from more important to less important. The marks Ⅰ, Ⅱ, Ⅲ, Ⅳ, Ⅴ and Ⅵ correspond to the ranking listed in <a href="#remotesensing-11-01183-t001" class="html-table">Table 1</a>.</p>
Full article ">Figure 8
<p>Areas vulnerable to vegetation degradation in different vegetation types. The areas in royal blue, purple and dark blue represent the alpine meadow, the alpine scrub and the alpine steppe, respectively. The risk detector identifies areas with a high degradation risk by comparing vegetation degradation levels across different ranges of the influencing factors. To avoid spatial overlap and to identify risky areas, for each vegetation type we selected the factors contributing most to vegetation degradation.</p>
Full article ">Figure A1
<p>The trends of (<b>a</b>) Population density and (<b>b</b>) Grazing density at county scale.</p>
Full article ">
15 pages, 2877 KiB  
Article
The Mangrove Forests Change and Impacts from Tropical Cyclones in the Philippines Using Time Series Satellite Imagery
by Mary Joy C. Buitre, Hongsheng Zhang and Hui Lin
Remote Sens. 2019, 11(6), 688; https://doi.org/10.3390/rs11060688 - 22 Mar 2019
Cited by 46 | Viewed by 16595
Abstract
The Philippines is rich in mangrove forests, containing 50% of the total mangrove species of the world. However, the vast mangrove areas of the country have declined to about half of their cover over the past century. In the 1970s, action was taken to protect the remaining mangrove forests under a government initiative, recognizing the ecological benefits mangrove forests can bring. Here, we examine two mangrove areas in the Philippines—Coron in Palawan and Balangiga-Lawaan in Eastern Samar over a 30-year period. Sets of Landsat images from 1987 to 2016 were classified and spatially analyzed using four landscape metrics. Additional analyses of the mangrove areas’ spatiotemporal dynamics were conducted. The impact of typhoon landfall on the mangrove areas was also analyzed in a qualitative manner. Spatiotemporal changes indicate that both the Coron and Balangiga-Lawaan mangrove forests, though declared protected areas, are still suffering from mangrove area loss. Mangrove areal shrinkage and expansion can be attributed to both typhoon occurrence and management practices. Overall, our study reveals which mangrove forests need more responsive action, and provides a different perspective for understanding the spatiotemporal dynamics of these mangrove areas. Full article
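Three of the four landscape metrics used in the study (class area CA, patch density PD, edge density ED) can be illustrated on a toy binary mangrove mask. The sketch below uses a plain 4-connected flood fill rather than the FRAGSTATS-style tooling a real analysis would rely on, and the mask and cell size are synthetic assumptions:

```python
# Rough illustration of landscape-metric definitions on a binary class mask.
import numpy as np

mask = np.array([
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 1],
    [0, 0, 0, 0],
])
cell = 30.0  # cell size in metres (Landsat-like)

# Class area (CA): class cells times cell area, in hectares.
ca_ha = mask.sum() * cell * cell / 10_000

# Patch count via a 4-connected flood fill.
seen, patches = np.zeros_like(mask, dtype=bool), 0
for r, c in zip(*np.nonzero(mask)):
    if seen[r, c]:
        continue
    patches += 1
    stack = [(r, c)]
    while stack:
        i, j = stack.pop()
        if not (0 <= i < mask.shape[0] and 0 <= j < mask.shape[1]):
            continue
        if seen[i, j] or mask[i, j] == 0:
            continue
        seen[i, j] = True
        stack += [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]

# Edge length: class/non-class boundaries inside the map (border ignored).
edges = np.sum(mask[:, 1:] != mask[:, :-1]) + np.sum(mask[1:, :] != mask[:-1, :])
landscape_ha = mask.size * cell * cell / 10_000
pd_per_100ha = patches / landscape_ha * 100   # patches per 100 ha
ed_m_per_ha = edges * cell / landscape_ha     # metres of edge per hectare
print(patches, ca_ha, round(pd_per_100ha, 1), round(ed_m_per_ha, 1))
```

Rising PD with falling CA over time would signal the fragmentation pattern the study tracks; the aggregation index (AI) additionally compares observed like-adjacencies with their maximum possible value.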
Figure 1
<p>Study Areas: (<b>Left</b>) Mangrove Areas of some of the villages in the Municipality of Coron, Palawan; (<b>Right</b>) Mangrove Areas of the Municipalities of Balangiga and Lawaan in Eastern Samar.</p>
Full article ">Figure 2
<p>Change of mangroves distribution of Coron in 1987–2016.</p>
Full article ">Figure 3
<p>Change of mangroves distribution of Balangiga-Lawaan in 1989–2016.</p>
Full article ">Figure 4
<p>Areal extent changes of mangroves in Coron and Balangiga-Lawaan during the study period.</p>
Full article ">Figure 5
<p>Areal change detection analysis of Coron and Balangiga-Lawaan mangroves over multiple decadal periods (1987–2016).</p>
Full article ">Figure 6
<p>Landscape Metrics Comparison for Coron and Balangiga-Lawaan Mangroves (clockwise from top left): Class Area (CA); Patch Density (PD); Aggregation Index (AI); and Edge Density (ED).</p>
Full article ">Figure 7
<p>Areal changes of mangroves (lines) and yearly frequency of tropical cyclones (bars).</p>
Full article ">

Review


19 pages, 2305 KiB  
Review
A Review of the Application of Remote Sensing Data for Abandoned Agricultural Land Identification with Focus on Central and Eastern Europe
by Tomáš Goga, Ján Feranec, Tomáš Bucha, Miloš Rusnák, Ivan Sačkov, Ivan Barka, Monika Kopecká, Juraj Papčo, Ján Oťaheľ, Daniel Szatmári, Róbert Pazúr, Maroš Sedliak, Jozef Pajtík and Jozef Vladovič
Remote Sens. 2019, 11(23), 2759; https://doi.org/10.3390/rs11232759 - 23 Nov 2019
Cited by 44 | Viewed by 7042
Abstract
This study aims to analyze and assess studies published from 1992 to 2019 and listed in the Web of Science (WOS) and Current Contents (CC) databases that identify agricultural abandonment through the application of remote sensing (RS) optical and microwave data. We selected 73 studies by applying structured queries in a field tag form and Boolean operators in the WOS portal, followed by expert analysis. An expert assessment provided an overview of the definitions and criteria used to identify abandoned agricultural land (AAL). The analysis also revealed an absence of comparable field research, which is needed not only for validation but also for understanding the process of agricultural abandonment. The benefit of fusing optical and radar data, which supports the application of Sentinel-1 and Sentinel-2 data, is also evident. Knowledge attained from the literature indicates that abandonment identification and biomass estimation are well covered in the world literature, whereas works assessing the natural accretion of biomass on AAL are missing. Full article
Graphical abstract
Full article ">Figure 1
<p>Flowchart of literature database processing.</p>
Full article ">Figure 2
<p>Annual publications of the analyzed documents.</p>
Full article ">Figure 3
<p>Locations of study sites in related studies selected for analysis according to each workflow.</p>
Full article ">Figure 4
<p>Satellites with optical sensors used in related studies selected for analysis according to each workflow (AIR. = airborne).</p>
Full article ">Figure 5
<p>Satellites with microwave sensors used in related studies selected for analysis according to each workflow (AIR. = airborne).</p>
Full article ">Figure 6
<p>Methods used for the identification of AAL.</p>
Full article ">

Other


9 pages, 2249 KiB  
Technical Note
Field-Scale Rice Yield Estimation Using Sentinel-1A Synthetic Aperture Radar (SAR) Data in Coastal Saline Region of Jiangsu Province, China
by Jianjun Wang, Qixing Dai, Jiali Shang, Xiuliang Jin, Quan Sun, Guisheng Zhou and Qigen Dai
Remote Sens. 2019, 11(19), 2274; https://doi.org/10.3390/rs11192274 - 29 Sep 2019
Cited by 37 | Viewed by 4588
Abstract
In recent years, a large number of salterns have been converted into rice fields in the coastal region of Jiangsu Province, Eastern China. The high spatial heterogeneity of soil salinity has caused large within-field variability in rice grain yield. The identification of low-yield areas within a field is an important initial step for precision farming. While optical satellite remote sensing can provide valuable information on crop growth and yield potential, the availability of cloud-free optical image data is often hampered by unfavorable weather conditions. Synthetic aperture radar (SAR) offers an alternative due to its near day-and-night, all-weather capability in data acquisition. Given the free data access of the Sentinels, this study aimed at developing a Sentinel-1A-based SAR index for rice yield estimation. The proposed SAR simple difference (SSD) index uses the change of the Sentinel-1A backscatter in vertical-horizontal (VH) polarization between the end of the tillering stage and the end of the grain filling stage (SSDVH). A strong exponential relationship was identified between SSDVH and rice yield, producing accurate yield estimates with a root mean square error (RMSE) of 0.74 t ha−1 and a relative error (RE) of 7.93%. Full article
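The workflow in the abstract (a two-date SSD index from VH backscatter, an exponential yield model, and leave-one-out cross-validation as in Figure 5) can be sketched as follows. The 42 synthetic "sites", the model coefficients, and the noise level are illustrative assumptions, not the paper's values:

```python
# Sketch: SSD_VH = VH backscatter (dB) change between two growth stages,
# yield modelled as yield = a * exp(b * SSD_VH), checked with LOOCV RMSE.
import numpy as np

def fit_exponential(x, y):
    """Fit y = a * exp(b * x) via linear regression on log(y)."""
    b, log_a = np.polyfit(x, np.log(y), 1)
    return np.exp(log_a), b

rng = np.random.default_rng(1)
ssd_vh = rng.uniform(-6.0, -1.0, 42)                               # dB change at 42 sites
measured = 9.0 * np.exp(0.15 * ssd_vh) + rng.normal(0.0, 0.1, 42)  # yield, t/ha

# Leave-one-out cross-validation of the exponential model.
errors = []
for i in range(ssd_vh.size):
    keep = np.arange(ssd_vh.size) != i
    a, b = fit_exponential(ssd_vh[keep], measured[keep])
    errors.append(a * np.exp(b * ssd_vh[i]) - measured[i])
rmse = float(np.sqrt(np.mean(np.square(errors))))
print(f"LOOCV RMSE = {rmse:.2f} t/ha")
```

Fitting on log(y) keeps the model linear in its parameters; a nonlinear least-squares fit on y itself is a common alternative when the noise is additive.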
Graphical abstract
Full article ">Figure 1
<p>Map shows the location of the study area and the spatial distribution of 42 sampling sites within the field.</p>
Full article ">Figure 2
<p>Maps show measured rice grain yields at 42 sampling sites overlaid on top of images of VV ((<b>a</b>) &amp; (<b>b</b>)) and VH ((<b>c</b>) &amp; (<b>d</b>)) backscatter values (<math display="inline"><semantics> <mrow> <msup> <mi mathvariant="sans-serif">σ</mi> <mn>0</mn> </msup> </mrow> </semantics></math>) in decibels (dB) calculated from two Sentinel-1A synthetic aperture radar (SAR) images acquired on July 19, 2016 ((<b>a</b>) &amp; (<b>c</b>)) and September 29, 2016 ((<b>b</b>) &amp; (<b>d</b>)).</p>
Full article ">Figure 3
<p>Scatter plots of SAR indices at single growth stage (x-axis) and grain yield (y-axis). (<b>a</b>), (<b>c</b>) &amp; (<b>e</b>) tillering stage, (<b>b</b>), (<b>d</b>) &amp; (<b>f</b>) grain filling stage.</p>
Full article ">Figure 4
<p>Scatter plots of SAR indices combining both growth stages (x-axis) and grain yield (y-axis). (<b>a</b>) SSD<sub>VV</sub>, (<b>b</b>) SSR<sub>VV</sub>, (<b>c</b>) SND<sub>VV</sub>, (<b>d</b>) SSD<sub>VH</sub>, (<b>e</b>) SSR<sub>VH</sub>, (<b>f</b>) SND<sub>VH</sub>, (<b>g</b>) SSD<sub>VV/VH</sub>, (<b>h</b>) SSR<sub>VV/VH</sub>, (<b>i</b>) SND<sub>VV/VH</sub>.</p>
Full article ">Figure 5
<p>Comparison between the measured rice grain yield and the grain yield estimated from the leave-one-out cross-validation (LOOCV) using SSD<sub>VH</sub> (<b>a</b>), SSR<sub>VH</sub> (<b>b</b>) and SND<sub>VH</sub> (<b>c</b>), respectively. The diagonal lines show the 1:1 relationship.</p>
Full article ">
18 pages, 6434 KiB  
Case Report
An Agricultural Drought Index for Assessing Droughts Using a Water Balance Method: A Case Study in Jilin Province, Northeast China
by Yijing Cao, Shengbo Chen, Lei Wang, Bingxue Zhu, Tianqi Lu and Yan Yu
Remote Sens. 2019, 11(9), 1066; https://doi.org/10.3390/rs11091066 - 6 May 2019
Cited by 30 | Viewed by 6160
Abstract
Drought, which causes economic, social, and environmental losses, threatens food security worldwide. In this study, we developed a vegetation-soil water deficit (VSWD) method to better assess agricultural droughts. The VSWD method considers precipitation, potential evapotranspiration (PET) and soil moisture. The soil moisture from different soil layers was compared with the in situ drought indices to select the appropriate depths for calculating soil moisture during growing seasons. The VSWD method and other indices for assessing agricultural droughts, i.e., the Scaled Drought Condition Index (SDCI), Vegetation Health Index (VHI) and Temperature Vegetation Dryness Index (TVDI), were compared with the in situ and multi-scale Standardized Precipitation Evapotranspiration Indices (SPEIs). The results show that the VSWD method performs better than SDCI, VHI, and TVDI. Based on the drought events collected from field sampling, it is found that the VSWD method can better distinguish the severities of agricultural droughts than the other indices mentioned here. Moreover, in the major historical drought events recorded in the study area, VSWD generated more sensible results than the other indices. The limitations of the VSWD method are also discussed. Full article
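The abstract does not give the VSWD formula, so the sketch below is only a generic water-balance style indicator in the same spirit: z-score the climatic water balance (P − PET) and root-zone soil moisture over a baseline, then average the two. All function names and numbers are synthetic illustrations, not the authors' method:

```python
# Hedged illustration of a water-balance drought indicator combining the
# climatic water balance (P - PET) with soil moisture; lower = drier.
import numpy as np

def water_deficit_index(precip, pet, soil_moisture):
    """Mean of z-scored (P - PET) and soil moisture; lower values = drier."""
    balance = np.asarray(precip, dtype=float) - np.asarray(pet, dtype=float)
    sm = np.asarray(soil_moisture, dtype=float)
    z = lambda v: (v - v.mean()) / v.std()
    return (z(balance) + z(sm)) / 2.0

# Monthly values for one growing season (May-September), turning dry late on.
precip = np.array([80.0, 120.0, 140.0, 60.0, 30.0])   # mm
pet = np.array([90.0, 100.0, 110.0, 120.0, 100.0])    # mm
soil = np.array([0.28, 0.30, 0.29, 0.20, 0.15])       # m3/m3, 0-40 cm layer
index = water_deficit_index(precip, pet, soil)
print(np.round(index, 2))
```

Because both components are standardized against the same baseline, the index is dimensionless and directly comparable across months; the driest month here is the last one, where both the water balance and soil moisture bottom out.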
Figure 1
<p>The study area: (<b>a</b>) the location of Northeast China; (<b>b</b>) the digital elevation model (DEM) data, sample points collected from August to September 2016, and the meteorological stations of Jilin Province.</p>
Full article ">Figure 2
<p>An example from multiple sources of datasets in September of 2016: (<b>a</b>) Potential evapotranspiration from moderate resolution imaging spectroradiometer (MODIS); (<b>b</b>) precipitation from tropical rainfall measuring mission (TRMM); (<b>c</b>) soil moisture (0–10 cm); and (<b>d</b>) soil moisture (10–40 cm) from global land data assimilation system (GLDAS).</p>
Full article ">Figure 3
<p>Photos taken from field sampling during August to September of 2016, showing different severities of agricultural droughts: (<b>a</b>) No drought; (<b>b</b>) slightly affected; (<b>c</b>) moderately affected; and (<b>d</b>) severely affected.</p>
Full article ">Figure 4
<p>(<b>a</b>) Annual precipitation; (<b>b</b>) potential evapotranspiration (PET); (<b>c</b>) the long-term differences between the precipitation and PET (1961–2008); (<b>d</b>) the bar chart of the monthly average precipitation (PRE), potential evapotranspiration (PET), and differences between PRE and PET (PRE-PET) in Baicheng (BC), Changchun (CC), and Yanbian (YB).</p>
Full article ">Figure 5
<p>Time series of vegetation-soil water deficit (VSWD) and Standardized Precipitation Evapotranspiration Index (SPEIs) during growing seasons from 2008 to 2016 in Baicheng, Changchun and Yanbian.</p>
Full article ">Figure 6
<p>Comparing VSWD with (<b>a</b>) precipitation; (<b>b</b>) soil moisture; and (<b>c</b>) PET during growing seasons from 2008 to 2013.</p>
Full article ">Figure 7
<p>Spatio-temporal drought conditions in the Jilin province indicated by multiple drought indices during the growing season (May–September) of 2016.</p>
Full article ">Figure 8
<p>The average index value of VSWD, temperature vegetation dryness index (TVDI), scaled drought condition index (SDCI), and vegetation health index (VHI) for each drought severity.</p>
Full article ">Figure 9
<p>The validation of VSWD with the CDD (the red ellipse marks drought events recorded in the China Drought Dataset).</p>
Full article ">Figure 10
<p>The performances of VSWD and the three-month standardized precipitation evapotranspiration index (SPEI-3) in the major drought event that occurred in Qianguo, Changling and Songyuan during June 2010 (the black ellipse represents the drought event; the red rectangle indicates precipitation, PET and soil moisture in June 2010).</p>
Full article ">Figure 11
<p>The performances of VSWD, VHI and SDCI in the major drought events that occurred in Songyuan and Liaoyuan (September 2009) and in Shuangyang (July 2011) (black ellipses represent the drought events; red rectangles indicate PCI, TCI and VCI in the drought event months).</p>
Full article ">