
Advances in Remote Sensing for Crop Monitoring and Yield Estimation

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Remote Sensing in Agriculture and Vegetation".

Deadline for manuscript submissions: closed (31 December 2021) | Viewed by 126835

Special Issue Editors

Department of Land, Air and Water Resources, University of California, Davis, 133 Veihmeyer Hall, One Shields Ave., Davis, CA 95616-8627, USA
Interests: remote sensing; data fusion and applications; agricultural monitoring; urban studies; environmental health
Department of Land, Air and Water Resources, University of California Davis, 133 Veihmeyer Hall, One Shields Ave., Davis, CA 95616-8627, USA
Interests: remote sensing; drivers and consequences of wildland fires; crop monitoring and precision agriculture; eco-hydrology; vegetation-climate-fire-human interaction; machine learning; UAV applications; geospatial technology

Special Issue Information

Dear Colleagues,

Global food security will remain a worldwide concern, especially in the face of challenges from climate change, population growth, water scarcity, environmental degradation, and biodiversity loss. Improving yields and maintaining agricultural sustainability through agroecological approaches and science-based farm management are of the utmost importance. Remote sensing has proven highly useful for monitoring crop conditions and estimating production, supporting agricultural management from local to regional, continental, and global scales. Many previous efforts have advanced the monitoring of crop conditions, including blooming and the phenology cycle, health and productivity, drought and heat stress, and other processes. These remote-sensing-based indicators of critical crop conditions are typically integrated with crop characteristics, climatic and soil variables, and auxiliary variables to build yield prediction models for cost-effective estimates of crop production at different spatial and temporal scales.

Today, emerging satellite missions, new remote sensing sensors, geospatial big data, and advances in artificial intelligence and machine learning provide new opportunities for a better understanding of crops' physical and biophysical processes. This Special Issue calls for innovative data, methods, and analysis techniques for remote-sensing-based crop monitoring and yield estimation. Suitable topics include, but are not limited to, crop condition monitoring, crop phenology, crop stress detection, remote sensing indicators of crops, crop yield prediction, controls on yield potential, drivers of yield variability, and multi-source data integration for sustainable agriculture.

You may choose our Joint Special Issue in Land.

Dr. Bin Chen
Dr. Yufang Jin
Prof. Dr. Le Yu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Remote sensing of agriculture
  • Crop yield estimation and prediction
  • Crop types and cropping intensity
  • Crop phenology, stress, and health status
  • Controls and drivers of yield variability
  • Remote sensing spectral indices for crops
  • Multi-scale monitoring and mapping of crop yields
  • Multi-source data fusion for sustainable agriculture

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (23 papers)


Research


16 pages, 4039 KiB  
Article
Remote Sensing-Based Assessment of the Water-Use Efficiency of Maize over a Large, Arid, Regional Irrigation District
by Lei Jiang, Yuting Yang and Songhao Shang
Remote Sens. 2022, 14(9), 2035; https://doi.org/10.3390/rs14092035 - 23 Apr 2022
Cited by 6 | Viewed by 2681
Abstract
Quantitative assessment of crop water-use efficiency (WUE) is an important basis for high-efficiency use of agricultural water. Here we assess the WUE of maize in the Hetao Irrigation District, a representative irrigation district in the arid region of Northwest China. Specifically, we first mapped the location of the maize fields using a remote sensing phenology-based vegetation classifier and then quantified maize water use and yield using a dual-source remote-sensing evapotranspiration (ET) model and a crop water production function, respectively. Validation results show that the adopted phenology-based vegetation classifier performed well in mapping the spatial distributions and inter-annual variations of maize planting, with a kappa coefficient of 0.86. In addition, the ET model based on the hybrid dual-source scheme and trapezoid framework also achieved high accuracy in spatiotemporal ET mapping, with an RMSE of 0.52 mm/day at the site scale and 26.21 mm/year during the maize growing season (April–October) at the regional scale. Further, the adopted crop water production function showed high accuracy in estimating maize yield, with a mean relative error of only 4.3%. Using the estimated ET, transpiration, and yield of maize, the mean maize WUE based on ET and transpiration in the study region were 1.94 kg/m3 and 3.06 kg/m3, respectively. Our results demonstrate the usefulness and validity of remote sensing information in mapping regional crop WUE. Full article
(This article belongs to the Special Issue Advances in Remote Sensing for Crop Monitoring and Yield Estimation)
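The WUE metrics reported in this abstract reduce to a ratio of yield to seasonal water use. A minimal sketch of that conversion (illustrative numbers, not the paper's HTEM estimates or production-function outputs):

```python
# Sketch of the WUE definitions: WUE_ET = yield / ET and
# WUE_T = yield / transpiration, expressed in kg/m^3.
# Numbers below are illustrative, not the paper's data.

def wue(yield_kg_per_ha: float, water_use_mm: float) -> float:
    """WUE in kg/m^3; 1 mm of water depth over 1 ha equals 10 m^3."""
    return yield_kg_per_ha / (water_use_mm * 10.0)

# Illustrative maize season: 10 t/ha yield, 520 mm ET, 330 mm transpiration
wue_et = wue(10_000, 520)
wue_t = wue(10_000, 330)
print(round(wue_et, 2), round(wue_t, 2))  # 1.92 3.03
```

The 10 m³/mm/ha factor comes from spreading a 1 mm water depth over 10,000 m²; with seasonal totals in this range the ratios land near the 1.94 and 3.06 kg/m³ values the study reports.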
Figures

Figure 1: Location of the study region (a), location of maize sampling points (b), and land cover classification in the study region as of 2000 (c).
Figure 2: Comparisons of ET estimated from HTEM with observed ET at the field scale (a) and regional ET estimated from water balance analysis (b).
Figure 3: Spatial distributions of HTEM-estimated total ET (a), E (b), and T (c) during the growing season (April to October) in 2012.
Figure 4: Mean NDVI series for sampled points and fitted asymmetric logistic function in 2012 (a) and the developed ellipse classifier (b).
Figure 5: Comparisons of statistical and classified maize planting areas from 2003 to 2012 (a) and spatial distribution of maize in 2012 (b).
Figure 6: Variations in maize ET and T in Hangjinhouqi during the growing season of 2012.
Figure 7: Spatial distributions of maize WUE_ET from 2003 to 2012.
Figure 8: Spatial distributions of maize WUE_T from 2003 to 2012.
Figure 9: Average WUE of maize in three counties during the study period.
Figure 10: Variations in maize evapotranspiration (ET), transpiration (T), and total water input (precipitation + net water diversion, P+Wn) (a) and maize yield, WUE_ET, and WUE_T (b) over the study period.
16 pages, 5530 KiB  
Article
Retrospective Predictions of Rice and Other Crop Production in Madagascar Using Soil Moisture and an NDVI-Based Calendar from 2010–2017
by Angela J. Rigden, Christopher Golden and Peter Huybers
Remote Sens. 2022, 14(5), 1223; https://doi.org/10.3390/rs14051223 - 2 Mar 2022
Cited by 8 | Viewed by 5777
Abstract
Malagasy subsistence farmers, who comprise 70% of the nearly 26 million people in Madagascar, often face food insecurity because of unreliable food production systems and adverse crop conditions. The 2020–2021 drought in Madagascar, in particular, is associated with an exceptional food crisis, yet we are unaware of peer-reviewed studies that quantitatively link variations in weather and climate to agricultural outcomes for staple crops in Madagascar. In this study, we use historical data to empirically assess the relationship between soil moisture and food production. Specifically, we focus on major staple crops that form the foundation of Malagasy food systems and nutrition, including rice, which accounts for 46% of the average Malagasy caloric intake, as well as cassava, maize, and sweet potato. Available data associated with survey-based crop statistics constrain our analysis to 2010–2017 across four clusters of Malagasy districts. Strong correlations are observed between remotely sensed soil moisture and rice production, ranging from 0.67 to 0.95 depending on the cluster and choice of crop calendar. Predictions are shown to be statistically significant at the 90% confidence level using bootstrapping techniques, as well as through an out-of-sample prediction framework. Soil moisture also shows skill in predicting cassava, maize, and sweet potato production, but only when the months most vulnerable to water stress are isolated. Additional analyses using more survey data, as well as potentially more-refined crop maps and calendars, will be useful for validating and improving soil-moisture-based predictions of yield. Full article
(This article belongs to the Special Issue Advances in Remote Sensing for Crop Monitoring and Yield Estimation)
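The empirical core described here, detrending production, correlating the anomalies with seasonal soil moisture, and bootstrapping for significance, can be sketched as follows. All data below are synthetic placeholders, not the paper's survey statistics or satellite soil-moisture products:

```python
# Sketch: detrend crop production, correlate the anomalies with
# seasonal-mean soil moisture, and bootstrap the correlation.
# Synthetic data only; not the paper's districts or calendars.
import numpy as np

rng = np.random.default_rng(0)

years = np.arange(2010, 2018)
sm = rng.normal(0.25, 0.03, size=years.size)  # seasonal-mean soil moisture
prod = (50_000 + 800 * (years - 2010)         # trend in tonnes
        + 120_000 * (sm - sm.mean())          # soil-moisture signal
        + rng.normal(0, 1_000, size=years.size))

def detrend(y, t):
    """Anomalies as deviations from a least-squares linear trend."""
    slope, intercept = np.polyfit(t, y, 1)
    return y - (slope * t + intercept)

sm_anom = detrend(sm, years)
prod_anom = detrend(prod, years)
r = np.corrcoef(sm_anom, prod_anom)[0, 1]

# Bootstrap: resample years with replacement, recompute the correlation
boot = []
for _ in range(2000):
    idx = rng.integers(0, years.size, size=years.size)
    boot.append(np.corrcoef(sm_anom[idx], prod_anom[idx])[0, 1])
lo, hi = np.nanpercentile(boot, [5, 95])  # 90% interval
print(f"r = {r:.2f}, 90% bootstrap interval [{lo:.2f}, {hi:.2f}]")
```

With only eight years per cluster, as in the study period, the bootstrap interval is a pragmatic alternative to parametric significance tests.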
Figures

Figure 1: Satellite-based annual average (a) soil moisture and (b) vegetation activity (NDVI) from 2010 to 2017. Districts are outlined on each map.
Figure 2: Survey-based average crop production (in tonnes) for each district, including (a) rice, (b) cassava, (c) maize, and (d) sweet potato, and the associated harvested area fractions for (e) rice, (f) cassava, (g) maize, and (h) sweet potato. The harvested area fraction represents the average fractional proportion of a 0.05-degree grid box that was harvested in a crop during the 1997–2003 era [37]. In (e–h), grid boxes with harvested area fractions less than 0.001 are colored gray.
Figure 3: The four calendars used to temporally average soil moisture anomalies for comparison with rice production anomalies (see Figures S5–S7 for analogous plots for cassava, maize, and sweet potato). The top row shows the starting month of the temporal average for calendars (a) C1, (b) C2, (c) C3, and (d) C4; the bottom row shows the ending month for (e) C1, (f) C2, (g) C3, and (h) C4. C1 and C2 represent mean planting (start) and mean harvesting (end) dates, while C3 and C4 represent temporal periods based on NDVI. For C2 (b,f), the rainy season calendar is mapped (see Figure S8 for separate maps of the rainy and dry seasons and Supplementary Data 1 for details on C2). The starting months for C3 (c) and C4 (d) are identical, as they represent the minimum in NDVI after the climatological seasonal cycle is smoothed with a 5-month moving window. For all calendars, the starting month can be in the year prior to harvest.
Figure 4: Districts grouped into four climate regions based on the average soil moisture within each district from 2010 to 2017, referred to from low to high soil moisture as "arid", "semi-arid", "semi-wet", and "wet".
Figure 5: Observations and predictions of rice production anomalies (tonnes) in the (a) arid, (b) semi-arid, (c) semi-wet, and (d) wet climate zones. Note that the y-axis of each subplot has a different scale. Anomalies in production are computed as deviations from a least-squares linear trend (see Table 1 for the average total production in each region). Predictions are made using the optimal growing season length (SM̄′_C4) following Equation (1). Observations are plotted with solid lines and predictions with dotted lines. See Figures S12–S14 for analogous plots for maize, cassava, and sweet potatoes.
22 pages, 3921 KiB  
Article
The Accuracy of Winter Wheat Identification at Different Growth Stages Using Remote Sensing
by Shengwei Liu, Dailiang Peng, Bing Zhang, Zhengchao Chen, Le Yu, Junjie Chen, Yuhao Pan, Shijun Zheng, Jinkang Hu, Zihang Lou, Yue Chen and Songlin Yang
Remote Sens. 2022, 14(4), 893; https://doi.org/10.3390/rs14040893 - 13 Feb 2022
Cited by 26 | Viewed by 5308
Abstract
The aim of this study was to explore the differences in the accuracy of winter wheat identification using remote sensing data at different growth stages using the same methods. Part of northern Henan Province, China was taken as the study area, and the winter wheat growth cycle was divided into five periods (seeding-tillering, overwintering, reviving, jointing-heading, and flowering-maturing) based on monitoring data obtained from agrometeorological stations. With the help of the Google Earth Engine (GEE) platform, the separability between winter wheat and other land cover types was analyzed and compared using the Jeffries-Matusita (J-M) distance method. Spectral features, vegetation index, water index, building index, texture features, and terrain features were generated from Sentinel-2 remote sensing images at different growth periods, and then were used to establish a random forest classification and extraction model. A deep U-Net semantic segmentation model based on the red, green, blue, and near-infrared bands of Sentinel-2 imagery was also established. By combining models with field data, the identification of winter wheat was carried out and the difference between the accuracy of the identification in the five growth periods was analyzed. The experimental results show that, using the random forest classification method, the best separability between winter wheat and the other land cover types was achieved during the jointing-heading period: the overall identification accuracy for the winter wheat was then highest at 96.90% and the kappa coefficient was 0.96. Using the deep-learning classification method, it was also found that the semantic segmentation accuracy of winter wheat and the model performance were best during the jointing-heading period: a precision, recall, F1 score, accuracy, and IoU of 0.94, 0.93, 0.93, and 0.88, respectively, were achieved for this period. 
Based on municipal statistical data for winter wheat, the accuracy of the extraction of the winter wheat area using the two methods was 96.72% and 88.44%, respectively. Both methods show that the jointing-heading period is the best period for identifying winter wheat using remote sensing and that the identification made during this period is reliable. The results of this study provide a scientific basis for accurately obtaining the area planted with winter wheat and for further studies into winter wheat growth monitoring and yield estimation. Full article
(This article belongs to the Special Issue Advances in Remote Sensing for Crop Monitoring and Yield Estimation)
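The Jeffries-Matusita (J-M) separability used in this study can be computed from per-class band statistics under a Gaussian class assumption. A minimal sketch with synthetic two-band samples (not the study's Sentinel-2 features):

```python
# Sketch of Jeffries-Matusita (J-M) separability between two classes,
# assuming Gaussian class distributions over band reflectances.
# Samples are synthetic (n_samples x n_bands), not the study's data.
import numpy as np

def jm_distance(x1: np.ndarray, x2: np.ndarray) -> float:
    """J-M distance in [0, 2]; values near 2 indicate good separability."""
    m1, m2 = x1.mean(axis=0), x2.mean(axis=0)
    c1, c2 = np.cov(x1, rowvar=False), np.cov(x2, rowvar=False)
    c = (c1 + c2) / 2
    diff = m1 - m2
    # Bhattacharyya distance between the two Gaussians
    b = diff @ np.linalg.solve(c, diff) / 8 + 0.5 * np.log(
        np.linalg.det(c) / np.sqrt(np.linalg.det(c1) * np.linalg.det(c2)))
    return float(2 * (1 - np.exp(-b)))

rng = np.random.default_rng(1)
# Two synthetic classes in a 2-band (e.g. NIR, red) feature space
wheat = rng.normal([0.45, 0.30], 0.02, size=(200, 2))
other = rng.normal([0.30, 0.28], 0.02, size=(200, 2))
d = jm_distance(wheat, other)
print(round(d, 3))  # close to 2: the classes are easy to separate
```

Comparing J-M values per growth stage, as the authors do, amounts to recomputing this statistic on features extracted from each period's imagery and picking the stage where it approaches 2.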
Figures

Figure 1: Geographical location of the study area: (a) distribution of sample points, with the main map based on the 30-m global land cover product fine classification system for 2020; (b) annual cumulative precipitation in 2020; (c) monthly average temperature in 2020.
Figure 2: Accuracy achieved using the random forest algorithm. S, O, R, J and M stand for the seeding-tillering, overwintering, reviving, jointing-heading and flowering-maturing periods, respectively. The user accuracy (UA, blue), producer accuracy (PA, yellow) and overall accuracy (OA, purple) are plotted on the left-hand vertical axis; the kappa coefficient (pink) is plotted on the right-hand vertical axis.
Figure 3: Winter wheat spatial distributions for the five growth periods and their differences, produced by the random forest classification. (a) The spatial distribution of winter wheat at the jointing-heading stage (reference image), which was subtracted from the classifications for the other periods to generate the difference maps (b–e). In the bar charts, I, II and III represent the ratios of correctly predicted, wrongly predicted and 'missing' winter wheat pixels to the union of the predicted and true winter wheat pixels, respectively. The red square, circle, triangle and pentagon mark the locations of the subsets shown in Figure 4.
Figure 4: Identification of winter wheat in four representative areas using random forest classification: (a,b) winter wheat areas with dense planting and flat terrain; (c,d) winter wheat areas with fragmented land use and complex terrain (Sentinel-2 false color composite images, bands 8/2/3). Parts (1) to (5) in each row show the pixels identified as winter wheat during the seeding-tillering, overwintering, reviving, jointing-heading and flowering-maturing periods, respectively.
Figure 5: Accuracy and loss functions of the training and validation sets for different growth periods. The black curve represents the training set accuracy, the red curve the validation set accuracy, the blue curve the training set loss and the green curve the validation set loss. The dotted line marks the number of training epochs corresponding to the optimal weights.
Figure 6: Accuracy of the identification of winter wheat using the test set. S, O, R, J and M represent the seeding-tillering, overwintering, reviving, jointing-heading and flowering-maturing periods, respectively. Numbers in bold indicate the best value for each evaluation index.
Figure 7: Results of the semantic segmentation of winter wheat using the test set. I–IV represent winter wheat planting areas in flat terrain, an area with multiple land-cover types, fragmented terrain and complex terrain, respectively. (a) True color Sentinel-2 composite images; (b) corresponding sample labels; (c–g) semantic segmentation results for the seeding-tillering, overwintering, reviving, jointing-heading and flowering-maturing periods, respectively.
Figure 8: Mapping of winter wheat based on deep learning classification. (a) U-Net classification map of winter wheat for the jointing-heading period (reference image), which was subtracted from the maps for the other periods to generate the difference maps (b–e). In the bar charts, I, II and III represent the ratios of correctly predicted, wrongly predicted and 'missing' winter wheat pixels to the union of the predicted and true winter wheat pixels, respectively.
22 pages, 11735 KiB  
Article
Monitoring Post-Flood Recovery of Croplands Using the Integrated Sentinel-1/2 Imagery in the Yangtze-Huai River Basin
by Miao Li, Tao Zhang, Ying Tu, Zhehao Ren and Bing Xu
Remote Sens. 2022, 14(3), 690; https://doi.org/10.3390/rs14030690 - 1 Feb 2022
Cited by 13 | Viewed by 4672
Abstract
The increasingly frequent flooding imposes tremendous and long-lasting damage to lives and properties in impoverished rural areas. Rapid, accurate, and large-scale flood mapping is urgently needed for flood management and has been successfully implemented to date thanks to advances in remote sensing and cloud computing technology. Yet the effects of agricultural emergency responses to floods have rarely been evaluated with satellite-based remote sensing, resulting in biased post-flood loss assessments. Addressing this challenge, this study presents a method for monitoring post-flood agricultural recovery using Sentinel-1/2 imagery, tested in three flood-affected main grain production areas in the middle and lower Yangtze and Huai River basins, China. Our results indicate that 33–72% of the affected croplands were replanted, avoiding total crop failure in summer 2020. Elevation, flood duration, crop rotation scheme, and flood emergency management affect post-flood recovery performance. The findings also demonstrate that rapid intervention measures adjusted to local conditions can greatly reduce agricultural losses from flood disasters. This study provides a new alternative for comprehensive disaster loss assessment in flood-prone agricultural regions and offers insights for flood control and management worldwide. Full article
(This article belongs to the Special Issue Advances in Remote Sensing for Crop Monitoring and Yield Estimation)
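One of the water-delineation baselines this study compares is a global Otsu threshold on Sentinel-1 backscatter. A self-contained sketch on a synthetic dB histogram (the constant −16.5 dB baseline and the adaptive local threshold method are not shown):

```python
# Sketch of global Otsu thresholding on SAR backscatter (dB) to separate
# low-backscatter water from land. The scene is a synthetic dB mixture,
# not real Sentinel-1 data.
import numpy as np

def otsu_threshold(values: np.ndarray, nbins: int = 256) -> float:
    """Threshold maximizing the between-class variance of a 1-D histogram."""
    hist, edges = np.histogram(values, bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2
    w = hist / hist.sum()
    w0 = np.cumsum(w)            # cumulative class-0 weight at each split
    w1 = 1 - w0
    mu = np.cumsum(w * centers)  # cumulative class-0 mass
    with np.errstate(invalid="ignore", divide="ignore"):
        between = (mu[-1] * w0 - mu) ** 2 / (w0 * w1)
    return float(centers[np.nanargmax(between)])

rng = np.random.default_rng(2)
water = rng.normal(-20.0, 1.5, 5_000)   # open water: low VV backscatter
land = rng.normal(-10.0, 2.0, 15_000)
scene = np.concatenate([water, land])

t = otsu_threshold(scene)
water_mask = scene < t
print(f"Otsu threshold ~ {t:.1f} dB, water pixels: {int(water_mask.sum())}")
```

On a bimodal scene like this the data-driven threshold lands between the water and land modes, which is why the study contrasts it with a fixed −16.5 dB cutoff that cannot adapt per scene.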
Figures

Figure 1: Geographic locations, flooding extents, and available Sentinel-1 imagery in July and August 2020 for the three study areas: (a) the Huai River, (b) the Poyang Lake, and (c) the Chao Lake.
Figure 2: In-situ observations collected on 20 September 2020, two months after the local flooding event in Sanjiao Township, Yongxiu County. Red triangles show the spatial locations and viewing directions of the photos. The background is a Sentinel-1 VV-polarized image acquired on 18 September 2020. Photos show croplands at different stages of waterlogging and recovery: (a) waterlogging, (b) recovered, (c) initial recovery, (d) severe waterlogging on the left and initial recovery on the right side of the houses.
Figure 3: Flowchart of the methodology.
Figure 4: Procedure for comparing the water delineation methods. (a) Grid tile generation by overlapping the floodplain dataset with Sentinel-1 and Sentinel-2 imagery of close dates; (b) creation of water/land samples through visual interpretation within grids of 0.2 × 0.2 degrees; (c) grid tiles of 0.05 × 0.05 degrees overlaying the Sentinel-1 imagery for water fraction calculation and method comparison.
Figure 5: RGB composites of Sentinel-2 imagery on close dates in 2019 and 2020 showing gradual agricultural recovery after the 2020 flooding in Changzhou Township, Poyang County. The red and orange points indicate diverse recovery conditions due to differences in position, cropping intensity, and flooding duration. The solid and dashed lines represent NDVI temporal trajectories of the two selected pixels in 2020 and 2019, respectively, derived from time series of Sentinel-2 imagery. (a–e) Sentinel-2 images acquired after the flooding in 2020; (f–j) Sentinel-2 images from 2019 acquired on close dates in correspondence to 2020.
Figure 6: Relationships between reference water ratios in grids of 0.05 × 0.05 degrees, derived from the Random Forest algorithm using Sentinel-2 imagery, and classified water ratios derived from (a) a constant threshold of −16.5 dB, (b) the global Otsu algorithm [57], and (c) the adaptive local threshold method [58] using Sentinel-1 imagery. Grey dashed lines are the 1:1 lines.
Figure 7: (a) Recovery area and (b) recovery percentage of the flood-affected croplands until late October, November, and December in the three study sites.
Figure 8: Spatial pattern of agricultural recovery near the Huai River. The zoom-in maps show local recovery conditions at (a) a flood diversion and storage area and (b) the edges of a tributary.
Figure 9: Spatial and temporal patterns of agricultural recovery around the Chao Lake. The zoom-in maps show recovery conditions of flooded cropland caused mainly by (a) rising waters and (b) flood diversion.
Figure 10: Spatial and temporal patterns of agricultural recovery around the Poyang Lake. The zoom-in maps highlight different distributions of recovered cropland owing to diverse stricken geographical locations: (a) recovery progressing from far from to close to the Poyang Lake; (b) recovery from the outer to the inner side of a polder; (c) unrecovered cropland encircled by lake water.
Figure 11: Joint kernel density estimation (KDE) plots showing DOY delays of maximum NDVI due to the 2020 flood disasters compared with 2019 and 2018 in the three study areas: (a) the Huai River region; (b) the Poyang Lake region; (c) the Chao Lake region. The KDE plots were generated from 5000 random points within the recovered flood-affected croplands.
Figure 12: Probability density function (PDF) plots showing differences between the inundation frequency three months after the 2020 flood peaks and that of the same period in 2019: (a) the Huai River region; (b) the Poyang Lake region; (c) the Chao Lake region. The PDF plots were generated using 2500 points within recovered croplands and 2500 points within unrecovered croplands, selected by stratified random sampling.
Figure 13: Local details of the classification results: the maximum NDVI composites from mid-September to December in 2019 and 2020, zoom-in agricultural recovery maps, and local characteristics of recovered and unrecovered croplands in the east of the Poyang Lake (a1–a4), northeast of the Chao Lake (b1–b4) and south of the Huai River (c1–c4).
Figure 14: Frequency histograms of maximum NDVI values after 15 September 2019 in the areas classified as unrecovered croplands. The dashed line indicates an NDVI value of 0.5. (a–c) represent the study sites at the Chao Lake, the Poyang Lake, and the Huai River.
21 pages, 8892 KiB  
Article
Radiative Transfer Image Simulation Using L-System Modeled Strawberry Canopies
by Zhen Guan, Amr Abd-Elrahman, Vance Whitaker, Shinsuke Agehara, Benjamin Wilkinson, Jean-Philippe Gastellu-Etchegorry and Bon Dewitt
Remote Sens. 2022, 14(3), 548; https://doi.org/10.3390/rs14030548 - 24 Jan 2022
Cited by 5 | Viewed by 3434
Abstract
The image-based modeling and simulation of plant growth have numerous and diverse applications. In this study, we used image-based and manual field measurements to develop and validate a methodology to simulate strawberry (Fragaria × ananassa Duch.) plant canopies throughout the Florida strawberry growing season. The simulated plants were used to create a synthetic image using radiative transfer modeling. Observed canopy properties were incorporated into an L-system simulator, and a series of strawberry canopies corresponding to specific weekly observation dates were created. The simulated canopies were compared visually with actual plant images and quantitatively with in-situ leaf area throughout the strawberry season. A simple regression model with L-system-derived and in-situ total leaf areas had an Adj R2 value of 0.78. The L-system simulated canopies were used to derive information needed for image simulation, such as leaf area and leaf angle distribution. Spectral and plant canopy information were used to create synthetic high spatial resolution multispectral images using the Discrete Anisotropic Radiative Transfer (DART) software. Vegetation spectral indices were extracted from the simulated image and used to develop multiple regression models of in-situ biophysical parameters (leaf area and dry biomass), achieving Adj R2 values of 0.63 and 0.50, respectively. The Normalized Difference Vegetation Index (NDVI) and the Red Edge Simple Ratio (SRre) vegetation indices, which utilize the red, red edge, and near infrared bands of the spectrum, were identified as statistically significant variables (p < 0.10). This study showed that both geometric (canopy seize metrics) and spectral variables were successful in modeling in-situ biomass and leaf area. Combining the geometric and spectral variables, however, only slightly improved the prediction model. 
These results show the feasibility of simulating strawberry canopies and images with the inherent geometrical, topological, and spectral properties of real strawberry plants. Beyond creating realistic computer graphics, the simulated canopies and images can be used in quantitative applications that require depicting vegetation biological processes, such as stress modeling and remote sensing mission planning.
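The canopy construction described above rests on L-system string rewriting. As a hedged toy illustration only (hypothetical symbols, not the authors' L-Studio productions: `A` for an apex, `P` for a petiole, `L` for a leaf, brackets for branching), one rewriting step per observation week could look like:

```python
# Minimal deterministic L-system rewriter. The single production below is an
# illustrative assumption: each step, the apex (A) emits one petiole-plus-leaf
# module (P[L]) and continues growing.
RULES = {"A": "P[L]A"}

def rewrite(axiom: str, steps: int) -> str:
    """Apply the production rules `steps` times to the axiom string."""
    s = axiom
    for _ in range(steps):
        s = "".join(RULES.get(ch, ch) for ch in s)
    return s

def leaf_count(s: str) -> int:
    """Each 'L' symbol corresponds to one simulated leaf."""
    return s.count("L")
```

In a real simulator, each `L` would carry geometry (the fitted leaflet surface) and each `P` a growth function, so quantities such as total leaf area fall out of the generated string.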
(This article belongs to the Special Issue Advances in Remote Sensing for Crop Monitoring and Yield Estimation)
Graphical abstract
Figure 1
<p>Study site: (<b>A</b>) University of Florida’s Gulf Coast Research and Education Center, (<b>B</b>) orthophoto of mobile image, (<b>C</b>) section of the six imaged beds (study plots are located in the two central beds), and (<b>D</b>) a single plant of “Florida127” strawberry.</p>
Figure 2
<p>High-resolution, close-range image locations and orientations.</p>
Figure 3
<p>Workflow for generating L-system strawberry models and the results assessment.</p>
Figure 4
<p>Modeling of a single leaf. (<b>A</b>) Point cloud generated from the close-range images of a plant canopy. (<b>B</b>) Extracted point cloud of a single leaf. (<b>C</b>) Corresponding mesh generated from the point cloud.</p>
Figure 5
<p>Modeling of a sample leaflet. (<b>A</b>) Digital surface model and the location of (16 × 16) sampled points. (<b>B</b>–<b>D</b>) Point cloud and L-system surface comparisons viewed from different angles.</p>
Figure 6
<p>Two critical angles controlling the shape, size, and orientation of a petiole (leaf petiole).</p>
Figure 7
<p>Observed number of petioles on a single plant vs. days after planting.</p>
Figure 8
<p>Modeling the growth function of the length of the petioles. (<b>A</b>) Petiole (leaf petiole) length over time. (<b>B</b>) Petiole length time-shifted to the same start time for all petioles. (<b>C</b>) Asymptotic function fitting. (<b>D</b>) Asymptotic function normalized (values 0–1) to input into L-Studio function editor.</p>
Figure 9
<p>Example of a real plant canopy vs. an L-system generated canopy throughout the growing season. (<b>A</b>) 40 days after planting. (<b>B</b>) 47 days after planting. (<b>C</b>) 54 days after planting. (<b>D</b>) 66 days after planting. (<b>E</b>) 80 days after planting. (<b>F</b>) 87 days after planting. (<b>G</b>) 94 days after planting. (<b>H</b>) 108 days after planting.</p>
Figure 10
<p>In-situ vs. L-system total leaf area for a strawberry plant canopy.</p>
Figure 11
<p>(<b>A</b>–<b>C</b>) L-system-derived canopy size metrics throughout the season. (<b>D</b>) L-system-derived and in-situ leaf area throughout the season.</p>
Figure 12
<p>True color visualization of a synthetic image.</p>
Figure 13
<p>Adjusted <span class="html-italic">R<sup>2</sup></span> of multiple regression modeling L-system leaf area, in-situ leaf area, and in-situ dry biomass as dependent variables, with three different sets of independent variables: (<b>A</b>) canopy size metrics extracted from mobile images, (<b>B</b>) L-system canopy size metrics, (<b>C</b>) vegetation indices of simulated images, and (<b>D</b>) vegetation indices and L-system canopy size metrics. Dependent variables: (blue) L-system leaf area, (orange) in-situ leaf area, and (gray) in-situ dry biomass.</p>
Figure A1
<p>Using the Fieldspec4 Spectroradiometer to measure (<b>A</b>) leaf spectra using the leaf clip, and (<b>B</b>) soil and plastic bed spectra using the pistol grip.</p>
Figure A2
<p>Growth function in L-studio editor. (<b>A</b>) Growth function of a petiole. (<b>B</b>) Growth function of a leaf.</p>
Figure A3
<p>L-studio framework and components.</p>
18 pages, 3427 KiB  
Article
Classification of Daily Crop Phenology in PhenoCams Using Deep Learning and Hidden Markov Models
by Shawn D. Taylor and Dawn M. Browning
Remote Sens. 2022, 14(2), 286; https://doi.org/10.3390/rs14020286 - 9 Jan 2022
Cited by 17 | Viewed by 4378
Abstract
Near-surface cameras, such as those in the PhenoCam network, are a common source of ground truth data in modelling and remote sensing studies. Despite having locations across numerous agricultural sites, few studies have used near-surface cameras to track the unique phenology of croplands. Because of management activities, crops do not follow the natural vegetation cycle on which many phenological extraction methods are based. For example, a field may experience abrupt changes throughout the year due to harvesting and tillage. A single camera can also record several different plants due to crop rotations, fallow fields, and cover crops. Current methods to estimate phenology metrics from image time series compress all image information into a relative greenness metric, which discards a large amount of contextual information. This can include the type of crop present, whether snow or water is present on the field, the crop phenology, or whether a field lacking green plants consists of bare soil, fully senesced plants, or plant residue. Here, we developed a modelling workflow to create a daily time series of crop type and phenology, while also accounting for other factors such as obstructed images and snow-covered fields. We used a mainstream deep learning image classification model, VGG16. Deep learning classification models do not have a temporal component, so to account for temporal correlation among images, our workflow incorporates a hidden Markov model in the post-processing. The initial image classification model had out-of-sample F1 scores of 0.83–0.85, which improved to 0.86–0.91 after all post-processing steps. The resulting time series show the progression of crops from emergence to harvest, and can serve as a daily, local-scale dataset of field states and phenological stages for agricultural research.
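The HMM post-processing described above can be sketched with a standard Viterbi decoder over the daily per-class probabilities. This is a minimal illustration, not the authors' implementation; the transition prior (`stay`) and the class names are assumptions:

```python
import math

def viterbi(daily_probs, stay=0.95):
    """Smooth a daily sequence of per-class probabilities with a simple HMM
    whose transitions favour staying in the same class. `daily_probs` is a
    list of dicts {class_name: probability} from the image classifier. The
    transition prior (`stay`) is an illustrative assumption, not a fitted
    value from the paper."""
    classes = list(daily_probs[0])
    switch = (1.0 - stay) / (len(classes) - 1)

    def log_trans(prev, cur):
        return math.log(stay if prev == cur else switch)

    # forward pass: best log score for ending in each class on each day
    score = {c: math.log(daily_probs[0][c] + 1e-12) for c in classes}
    backptrs = []
    for day in daily_probs[1:]:
        new_score, ptrs = {}, {}
        for c in classes:
            best = max(classes, key=lambda p: score[p] + log_trans(p, c))
            new_score[c] = score[best] + log_trans(best, c) + math.log(day[c] + 1e-12)
            ptrs[c] = best
        score = new_score
        backptrs.append(ptrs)

    # backward pass: recover the most likely class path
    state = max(classes, key=score.get)
    path = [state]
    for ptrs in reversed(backptrs):
        state = ptrs[state]
        path.append(state)
    return path[::-1]
```

With `stay = 0.95`, a single low-confidence day surrounded by confident "corn" days is relabelled "corn", mirroring how the HMM suppresses spurious day-to-day class flips that a per-image argmax would keep.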
(This article belongs to the Special Issue Advances in Remote Sensing for Crop Monitoring and Yield Estimation)
Graphical abstract
Figure 1
<p>Example images from the PhenoCam network used in this study. The image sources and annotations are as follows: (<b>A</b>) is from arsope3ltar on 17 September 2018 with soybeans in the growth stage, (<b>B</b>) is from cafcookwestltar01 on 15 August 2017 with wheat which is fully senesced, (<b>C</b>) is from kelloggcorn on 27 November 2018 and is blurry for all categories, (<b>D</b>) is from arsltarmdcr on 28 August 2017 with corn in the flowers stage, (<b>E</b>) is from mandanh5 on 27 June 2017 and has corn in the growth stage with a dominant cover of residue, (<b>F</b>) is from arsgacp1 on 25 August 2016 with a dominant cover of plant residue and no crop present, (<b>G</b>) is from NEON.D06.KONA.DP1.00033 on 24 December 2017 with a dominant cover of snow and no crop present, (<b>H</b>) is from hawbeckereddy on 26 November 2015 with an unknown crop type in the growth stage, and (<b>I</b>) is from bouldinalfalfa on 22 October 2019 with a dominant cover of residue and no crop present. Images (<b>A</b>,<b>B</b>,<b>D</b>,<b>H</b>) all have vegetation as the dominant cover class.</p>
Figure 2
<p>Accuracy metrics for the VGG16 image classifier. The black and grey bars represent the validation and training datasets, respectively. The text indicates the respective metric value for validation, training, and class sample size in parentheses. The training and validation sample sizes are 80% and 20% of the total sample size, respectively. Overall indicates the average metric value for the respective category, weighted by sample size. All three metrics have a range of 0–1, where 1 equals a perfect prediction.</p>
Figure 3
<p>Accuracy metrics for the classifications after post-processing. The black and grey bars represent the validation and training datasets, respectively. The text indicates the respective metric value for validation, training, and class sample size in parentheses. The training and validation sample sizes are 80% and 20% of the total sample size, respectively. Overall indicates the average metric value for the respective category, weighted by sample size. All three metrics have a range of 0–1, where 1 equals a perfect prediction. The difference between this figure and <a href="#remotesensing-14-00286-f002" class="html-fig">Figure 2</a> is the exclusion of the blurry and unknown plant classes, with total sample sizes reflecting this.</p>
Figure 4
<p>Classification results for the arsmorris2 site for the year 2020. The panels represent the results for the Dominant Cover (<b>A</b>), Crop Type (<b>B</b>), and Crop Status (<b>C</b>) categories. The top two rows of each panel represent the final classification for either the daily maximum probability (MaxP) or the hidden Markov model (HMM). The remaining rows in each panel represent the initial model classification for the respective class, where larger sizes represent higher probability.</p>
Figure 5
<p>Classification results for the bouldinalfalfa site for the year 2018. The panels represent the results for the Dominant Cover (<b>A</b>), Crop Type (<b>B</b>), and Crop Status (<b>C</b>) categories. The top two rows of each panel represent the final classification for either the daily maximum probability (MaxP) or the hidden Markov model (HMM). The remaining rows in each panel represent the initial model classification for the respective class, where larger sizes represent higher probability.</p>
Figure 6
<p>Classification results for the cafcookeastltar01 site for the year 2018. The panels represent the results for the Dominant Cover (<b>A</b>), Crop Type (<b>B</b>), and Crop Status (<b>C</b>) categories. The top two rows of each panel represent the final classification for either the daily maximum probability (MaxP) or the hidden Markov model (HMM). The remaining rows in each panel represent the initial model classification for the respective class, where larger sizes represent higher probability.</p>
Figure 7
<p>Classification results for the NEON.D06.KONA.DP1.00042 site for the year 2017. The panels represent the results for the Dominant Cover (<b>A</b>), Crop Type (<b>B</b>), and Crop Status (<b>C</b>) categories. The top two rows of each panel represent the final classification for either the daily maximum probability (MaxP) or the hidden Markov model (HMM). The remaining rows in each panel represent the initial model classification for the respective class, where larger sizes represent higher probability.</p>
16 pages, 4204 KiB  
Article
Integrating Sentinel-1/2 Data and Machine Learning to Map Cotton Fields in Northern Xinjiang, China
by Tao Hu, Yina Hu, Jianquan Dong, Sijing Qiu and Jian Peng
Remote Sens. 2021, 13(23), 4819; https://doi.org/10.3390/rs13234819 - 27 Nov 2021
Cited by 10 | Viewed by 3507
Abstract
Timely and accurate information on cotton planting areas is essential for monitoring and managing cotton fields. However, there is no large-scale, high-resolution method suitable for mapping cotton fields, and the problems associated with low resolution and poor timeliness need to be solved. Here, we proposed a new framework for mapping cotton fields based on Sentinel-1/2 data for different phenological periods, random forest classifiers, and the multi-scale image segmentation method. A cotton field map for 2019 at a spatial resolution of 10 m was generated for northern Xinjiang, a dominant cotton planting region in China. The overall accuracy and kappa coefficient of the map were 0.932 and 0.813, respectively. The results showed that the boll opening stage was the best phenological phase for mapping cotton fields; cotton fields were identified most accurately at the early boll opening stage, about 40 days before harvest. Additionally, Sentinel-1 and the red-edge bands of Sentinel-2 are important for cotton field mapping, and there is great potential for the fusion of optical and microwave images in crop mapping. This study provides an effective approach for high-resolution and high-accuracy cotton field mapping, which is vital for sustainable monitoring and management of cotton planting.
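The abstract highlights Sentinel-1 backscatter and Sentinel-2 red-edge bands as key classification inputs. A rough sketch of the kind of per-pixel feature vector such a random forest might consume (the band combinations and the dB conversion are illustrative, not the study's exact preprocessing):

```python
import math

def ndvi(nir, red):
    """Normalized difference vegetation index from Sentinel-2 reflectances."""
    return (nir - red) / (nir + red)

def re_ndvi(nir, red_edge):
    """Red-edge variant of NDVI, using a Sentinel-2 red-edge band."""
    return (nir - red_edge) / (nir + red_edge)

def to_db(sigma0):
    """Convert linear SAR backscatter (sigma-0) to decibels."""
    return 10.0 * math.log10(sigma0)

def pixel_features(red, red_edge, nir, vv, vh):
    """Stack optical indices and Sentinel-1 VV/VH backscatter into one
    feature vector for a classifier."""
    return [ndvi(nir, red), re_ndvi(nir, red_edge), to_db(vv), to_db(vh)]
```

Computing such vectors for each phenological period and concatenating them is one plausible way to let a random forest exploit both the optical and microwave signals the paper combines.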
(This article belongs to the Special Issue Advances in Remote Sensing for Crop Monitoring and Yield Estimation)
Figure 1
<p>Geographical location and ground samples of northern Xinjiang, China.</p>
Figure 2
<p>Phenology calendars for the major crops in northern Xinjiang in 2019.</p>
Figure 3
<p>Framework for mapping cotton fields.</p>
Figure 4
<p>Relationship between the number of decision trees and the overall accuracy, as well as kappa coefficient.</p>
Figure 5
<p>Spatial distribution of cotton fields in northern Xinjiang (<b>a</b>). Some major cotton planting areas are shown in (<b>b</b>) Shawan County, (<b>c</b>) Karamay District, (<b>d</b>) Kuitun City, and (<b>e</b>) Wusu City.</p>
Figure 6
<p>Gini importance contrast of different classification features in each month.</p>
16 pages, 4014 KiB  
Article
High Resolution Distribution Dataset of Double-Season Paddy Rice in China
by Baihong Pan, Yi Zheng, Ruoque Shen, Tao Ye, Wenzhi Zhao, Jie Dong, Hanqing Ma and Wenping Yuan
Remote Sens. 2021, 13(22), 4609; https://doi.org/10.3390/rs13224609 - 16 Nov 2021
Cited by 49 | Viewed by 6477
Abstract
Although China is the largest producer of rice, accounting for about 25% of global production, there are no high-resolution maps of paddy rice covering the entire country. Using time-weighted dynamic time warping (TWDTW), this study developed a pixel- and phenology-based method to identify planting areas of double-season paddy rice in China by comparing temporal variations of synthetic aperture radar (SAR) signals of unknown pixels to those of known double-season paddy rice fields. We conducted a comprehensive evaluation of the method’s performance at pixel and regional scales. Based on 145,210 field-surveyed samples from 2018 to 2020, the producer’s and user’s accuracies are 88.49% and 87.02%, respectively. Compared to county-level statistical data from 2016 to 2019, the relative mean absolute error is 34.11%. This study produced distribution maps of double-season rice at 10 m spatial resolution from 2016 to 2020 over nine provinces in South China, which account for more than 99% of the double-season paddy rice planting area of China. The maps are expected to contribute to timely monitoring and evaluation of rice growth and yield.
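TWDTW extends ordinary dynamic time warping with a logistic penalty on the calendar distance between matched observations, so a VH backscatter dip can only match the reference transplanting dip if it occurs at a plausible time of year. A compact sketch (the `alpha`/`beta` weight parameters are illustrative, not this study's calibrated values):

```python
import math

def twdtw(a, b, t_a, t_b, alpha=0.1, beta=50.0):
    """Time-Weighted Dynamic Time Warping distance: the usual DTW local cost
    |a_i - b_j| gets a logistic penalty on the elapsed-time gap between the
    matched acquisition dates, discouraging matches far apart in the
    calendar. a, b are value series; t_a, t_b their dates in days."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dt = abs(t_a[i - 1] - t_b[j - 1])                  # days apart
            w = 1.0 / (1.0 + math.exp(-alpha * (dt - beta)))   # logistic time weight
            cost = abs(a[i - 1] - b[j - 1]) + w
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

A pixel whose VH series matches the reference curve at the right dates gets a small distance; the same series shifted months away in the calendar accumulates the time penalty and is rejected.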
(This article belongs to the Special Issue Advances in Remote Sensing for Crop Monitoring and Yield Estimation)
Graphical abstract
Figure 1
<p>Study area spans nine provinces over China (the yellow region). The solid black lines represent the boundary of the provinces. The red triangles indicate 25 unmanned aerial vehicle (UAV) field survey sites in 2018, and each site covers 1 km<sup>2</sup> field area. The blue dots indicate GPS survey points from 2018 to 2020.</p>
Figure 2
<p>The temporal series of satellite-based VH at a double-season paddy rice sample site. VH: Dual-band cross-polarization vertical transmit/horizontal receive backscatter coefficient of SAR.</p>
Figure 3
<p>Rice cropping calendar and Sentinel-1 image acquisition dates in Guangdong, Guangxi, Hunan, Jiangxi, Fujian, Hubei, Anhui, and Zhejiang provinces (<b>a</b>) and Hainan province (<b>b</b>).</p>
Figure 4
<p>(<b>a</b>) The temporal series of VH at a double-season paddy rice pixel; the red curve is a standard VH curve covering five acquisition times (60 days). (<b>b</b>–<b>d</b>) The temporal series of VH at an unknown pixel with three moving windows applied, respectively; the green curve is an extracted unknown curve covering five acquisition times (60 days).</p>
Figure 5
<p>Distribution map of early rice (<b>a</b>) and late rice (<b>b</b>) in 2018.</p>
Figure 6
<p>County-level comparison of identified and statistical planting areas from 2016–2019, for early rice (<b>a</b>–<b>d</b>) and late rice (<b>e</b>–<b>h</b>).</p>
Figure 7
<p>The comparison between identified planting areas of early rice and agricultural census areas at county-level for 2016–2019 in all nine investigated provinces.</p>
Figure 8
<p>The comparison between identified planting areas of late rice and agricultural census areas at county-level for 2016–2019 in all nine investigated provinces.</p>
Figure 9
<p>Statistics for patches with different pixel numbers in the distribution map of early rice (<b>a</b>) and late rice (<b>b</b>) in 2018.</p>
Figure 10
<p>Times of Sentinel-1 original observations from February to December during 2016 (<b>a</b>) to 2020 (<b>e</b>).</p>
25 pages, 14079 KiB  
Article
Evaluating the Farmland Use Intensity and Its Patterns in a Farming—Pastoral Ecotone of Northern China
by Xin Chen, Guoliang Zhang, Yuling Jin, Sicheng Mao, Kati Laakso, Arturo Sanchez-Azofeifa, Li Jiang, Yi Zhou, Haile Zhao, Le Yu, Rui Jiang, Zhihua Pan and Pingli An
Remote Sens. 2021, 13(21), 4304; https://doi.org/10.3390/rs13214304 - 26 Oct 2021
Cited by 3 | Viewed by 3172
Abstract
The growing population and northward shifts in the center of grain production collectively contribute to the rising farmland use intensity of the farming–pastoral ecotone of Northern China (FPENC). Consequently, this poses a great threat to the vulnerable ecosystem of FPENC, and monitoring farmland use intensity is a top priority for practicing sustainable farming. In this study, we establish an indicator system designed to evaluate farmland use intensity in Ulanqab, located in the central part of FPENC. This system includes three single-year indicators (the degree of coupling between effective rainfall and crop water requirement (Dcrr), irrigation intensity (Iri), and crop duration (Cd)) and two multi-year indicators (the frequency of adopting the green-depressing cropping system (Gf) and rotation frequency (Rf)). We mapped the five farmland use intensity indicators in Ulanqab from 2010 to 2019 using satellite imagery and other ancillary data. Then, farmland use patterns were recognized by applying the self-organizing map algorithm. Our results suggest that the mapping results of crop types, center pivot irrigation (CPI), and irrigated areas are reasonably accurate. Iri, Cd, and Rf increased by 31 m3/hm2, 1 day, and 0.06, respectively, in Ulanqab from 2010 to 2019, while Dcrr and Gf decreased by 0.002 and 0.004, respectively. That is, farmers are progressively inclined toward higher farmland use intensity. Moreover, spatial heterogeneity analysis shows that northern Ulanqab had higher Dcrr, Iri, Cd, and Rf, and lower Gf, than the southern part. We conclude the paper by discussing the implications of the results for areas with different farmland use intensity patterns.
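The pattern-recognition step above uses a self-organizing map (SOM), which clusters townships by repeatedly pulling each best-matching map node (and its neighbours) toward an input vector of intensity indicators. A toy 1-D SOM, shown only to illustrate the update rule (node count, learning rate, and neighbourhood radius are arbitrary choices, not the paper's configuration):

```python
def train_som(data, n_nodes=4, epochs=30, lr=0.5):
    """Train a tiny 1-D self-organizing map on a list of equal-length
    feature vectors. Illustrative only."""
    dim = len(data[0])
    # deterministic init: nodes spread along the data-space diagonal
    nodes = [[k / (n_nodes - 1)] * dim for k in range(n_nodes)]
    for epoch in range(epochs):
        rate = lr * (1.0 - epoch / epochs)      # decaying learning rate
        for x in data:
            # best-matching unit (BMU) by squared Euclidean distance
            bmu = min(range(n_nodes),
                      key=lambda k: sum((nodes[k][d] - x[d]) ** 2 for d in range(dim)))
            for k in (bmu - 1, bmu, bmu + 1):   # neighbourhood radius of 1
                if 0 <= k < n_nodes:
                    for d in range(dim):
                        nodes[k][d] += rate * (x[d] - nodes[k][d])
    return nodes

def assign(nodes, x):
    """Cluster label = index of the nearest node."""
    return min(range(len(nodes)),
               key=lambda k: sum((nodes[k][d] - x[d]) ** 2 for d in range(len(x))))
```

After training, townships with similar Dcrr/Iri/Cd/Gf/Rf profiles land on the same or adjacent nodes, which is the property the study exploits to delineate farmland use intensity patterns.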
(This article belongs to the Special Issue Advances in Remote Sensing for Crop Monitoring and Yield Estimation)
Graphical abstract
Figure 1
<p>Location of Ulanqab. (<b>a</b>) Location of FPENC in China; (<b>b</b>) location of Ulanqab in FPENC; (<b>c</b>) administrative units of Ulanqab, where the regions filled with diagonal gray lines belong to the Qianshan area, and the solid gray regions are the Houshan area. The base layers in (<b>a</b>–<b>c</b>) denote the newly released world ocean base map provided in ArcGIS 10.2.</p>
Figure 2
<p>Overall technical workflow of this study. (<b>a</b>) Aspects that affect farmland use intensity; (<b>b</b>) assessment index system of farmland use intensity; (<b>c</b>) mapping methods of crop types, GDCS fields, irrigated land, and CPI; (<b>d</b>) mapping method of farmland use patterns.</p>
Figure 3
<p>(<b>a</b>) Distribution for crops and the inactive GDCS fields in Ulanqab for 2010–2019; (<b>b</b>) distribution for center pivot irrigation in Ulanqab for 2010–2019; (<b>c</b>) distribution for irrigated land and rainfed land in Ulanqab for 2012 and 2017.</p>
Figure 4
<p>Scatter plots of predicted areas versus observed areas at the county level for 2010–2018 (<b>a</b>–<b>i</b>) and four crops (<b>j</b>–<b>m</b>).</p>
Figure 5
<p>The monthly <span class="html-italic">Dcrr</span> (<b>a</b>–<b>d</b>,<b>f</b>–<b>i</b>) and <span class="html-italic">Dcrr</span> during the whole growing season (<b>e</b>,<b>j</b>) of main crops in the Houshan area (<b>a</b>–<b>e</b>) and the Qianshan area (<b>f</b>–<b>j</b>) from 2000–2019.</p>
Figure 6
<p>Spatio-temporal patterns of <span class="html-italic">Dcrr</span> for Ulanqab from 2010 to 2019. (<b>a</b>–<b>c</b>) Trends in <span class="html-italic">Dcrr</span> of the Ulanqab, the Houshan area, and the Qianshan area from 2010–2019, respectively; (<b>d</b>,<b>g</b>) the trends in <span class="html-italic">Dcrr</span> at county level and town level from 2010–2019, respectively; (<b>e</b>) one-way ANOVA of <span class="html-italic">Dcrr</span> between the Qianshan group and Houshan group in 2019; (<b>f</b>) one-way ANOVA of <span class="html-italic">Dcrr</span> change trend (slope) between the Qianshan and Houshan groups from 2010–2019. *** <span class="html-italic">p</span> &lt; 0.01, n.s., not significant.</p>
Figure 7
<p>Results of the hotspot analysis for the indicators. (<b>a</b>) <span class="html-italic">Dcrr</span>, (<b>b</b>) <span class="html-italic">Iri</span>, (<b>c</b>) <span class="html-italic">Cd</span>, (<b>d</b>) <span class="html-italic">Gf</span>, and (<b>e</b>) <span class="html-italic">Rf</span>.</p>
Figure 8
<p>Spatio-temporal patterns of <span class="html-italic">Iri</span> for Ulanqab from 2010 to 2019. (<b>a</b>–<b>c</b>) Trends in <span class="html-italic">Iri</span> of the Ulanqab, the Houshan area, and the Qianshan area from 2010–2019, respectively; (<b>d</b>,<b>g</b>) are the trends in <span class="html-italic">Iri</span> at county and town levels from 2010–2019, respectively; (<b>e</b>) one-way ANOVA of <span class="html-italic">Iri</span> between the Qianshan and Houshan groups in 2019; (<b>f</b>) one-way ANOVA of the rate of change (<span class="html-italic">roc</span>) in <span class="html-italic">Iri</span> between the Qianshan and Houshan groups from 2010–2019. * <span class="html-italic">p</span> &lt; 0.1, n.s., not significant.</p>
Figure 9
<p>Spatio-temporal patterns of <span class="html-italic">Cd</span> for Ulanqab from 2010 to 2019. (<b>a</b>–<b>c</b>) Trends in <span class="html-italic">Cd</span> of the Ulanqab, the Houshan area, and the Qianshan area from 2010–2019, respectively; (<b>d</b>,<b>g</b>) trends in <span class="html-italic">Cd</span> at county and town levels from 2010–2019, respectively; (<b>e</b>) one-way ANOVA of <span class="html-italic">Cd</span> between the Qianshan and Houshan groups in 2019; (<b>f</b>) is the one-way ANOVA of <span class="html-italic">Cd</span> change trend (<span class="html-italic">slope</span>) between the Qianshan and Houshan groups from 2010–2019. n.s., not significant.</p>
Figure 10
<p>Spatio-temporal patterns of <span class="html-italic">Gf</span> for Ulanqab from 2010 to 2019. (<b>a</b>,<b>b</b>) Pixel-wise Gf maps of Ulanqab for 2010–2014 and 2015–2019, respectively; (<b>c</b>,<b>d</b>) <span class="html-italic">Gf</span> at town level for 2010–2014 and 2015–2019, respectively; (<b>e</b>) rate of change (<span class="html-italic">roc</span>) in <span class="html-italic">Gf</span> at town level; (<b>f</b>,<b>g</b>) one-way ANOVA of <span class="html-italic">Gf</span> between the Qianshan and Houshan groups for 2010–2014 and 2015–2019, respectively; (<b>h</b>) one-way ANOVA of <span class="html-italic">Gf</span> roc between the Qianshan and Houshan groups from 2010–2019. *** <span class="html-italic">p</span> &lt; 0.01.</p>
Figure 11
<p>Spatio-temporal patterns of <span class="html-italic">Rf</span> for Ulanqab from 2010 to 2019. (<b>a</b>,<b>b</b>) Pixel-wise <span class="html-italic">Rf</span> maps of Ulanqab for 2010–2014 and 2015–2019, respectively; (<b>c</b>,<b>d</b>) <span class="html-italic">Rf</span> at town level for 2010–2014 and 2015–2019, respectively; (<b>e</b>) rate of change (roc) in <span class="html-italic">Rf</span> at town level; (<b>f</b>,<b>g</b>) are the one-way ANOVA of <span class="html-italic">Rf</span> between the Qianshan and Houshan groups for 2010–2014 and 2015–2019, respectively; (<b>h</b>) one-way ANOVA of <span class="html-italic">Rf roc</span> between the Qianshan and Houshan groups from 2010–2019. ** <span class="html-italic">p</span> &lt; 0.05, *** <span class="html-italic">p</span> &lt; 0.01, n.s., not significant.</p>
Figure 12
<p>(<b>a</b>) Training progress of a SOM neural network; (<b>b</b>) codes plot, which shows the classification of each node; (<b>c</b>) the heat map of the node counts, which shows the number of training inputs associated with each code; (<b>d</b>) mapping plot, which shows the values of each node.</p>
Figure 13
<p>(<b>a</b>) Z-scores for each indicator describing each cluster; (<b>b</b>) SOM cluster of farmland use intensity indicators for each township in Ulanqab. The numbers in parentheses show the number of towns included in each cluster.</p>
Figure A1
<p>The monthly <span class="html-italic">ETc</span> (<b>a</b>–<b>d</b>,<b>f</b>–<b>i</b>) and total <span class="html-italic">ETc</span> (<b>e</b>,<b>j</b>) during the whole growing season of main crops in the Houshan area (<b>a</b>–<b>e</b>) and the Qianshan area (<b>f</b>–<b>j</b>) from 2000–2019.</p>
Figure A2
<p>The monthly <span class="html-italic">Pe</span> in the Houshan area (Siziwang Banner and Huade stations) and the Qianshan area (Jining station) during the whole growing season from 2000–2019.</p>
16 pages, 2717 KiB  
Article
USA Crop Yield Estimation with MODIS NDVI: Are Remotely Sensed Models Better than Simple Trend Analyses?
by David M. Johnson, Arthur Rosales, Richard Mueller, Curt Reynolds, Ronald Frantz, Assaf Anyamba, Ed Pak and Compton Tucker
Remote Sens. 2021, 13(21), 4227; https://doi.org/10.3390/rs13214227 - 21 Oct 2021
Cited by 38 | Viewed by 7435
Abstract
Crop yield forecasting is performed monthly during the growing season by the United States Department of Agriculture’s National Agricultural Statistics Service. The underpinnings are long-established probability surveys reliant on farmers’ feedback in parallel with biophysical measurements. Over the last decade, though, satellite imagery from the Moderate Resolution Imaging Spectroradiometer (MODIS) has been used to corroborate the survey information. This is facilitated through the Global Inventory Modeling and Mapping Studies/Global Agricultural Monitoring system, which provides open access to pertinent real-time normalized difference vegetation index (NDVI) data. Hence, two relatively straightforward MODIS-based modeling methods are employed operationally. The first model constitutes mid-season timing based on the maximum peak NDVI value, while the second is reflective of late-season timing by integrating accumulated NDVI over a threshold value. Corn model results nationally show the peak NDVI method provides an R2 of 0.88 and a coefficient of variation (CV) of 3.5%. The accumulated method, using an optimally derived 0.58 NDVI threshold, improves the performance to 0.93 and 2.7%, respectively. Both of these models outperform simple trend analysis, which yields 0.48 and 7.4%, respectively. For soybeans, the R2 of the peak NDVI model is 0.62, and 0.73 for the accumulated model using a 0.56 threshold; CVs are 6.8% and 5.7%, respectively. Spring wheat’s R2 performance with the accumulated NDVI model is 0.60 but just 0.40 with peak NDVI. The soybean and spring wheat models perform similarly to trend analysis. Winter wheat and upland cotton show poor model performance, regardless of method. Ultimately, corn yield forecasting derived from MODIS imagery is robust, and there are circumstances when forecasts for soybeans and spring wheat have merit too.
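The accumulated-NDVI model integrates NDVI above a threshold over the season and regresses yield on that integral; the trend model regresses yield on year alone. A minimal sketch (the 0.58 default is the corn threshold reported in the abstract; the least-squares helper is a stand-in, not the operational GIMMS/GLAM code):

```python
def accumulated_ndvi(series, threshold=0.58):
    """Season-long integral of NDVI above a threshold: only the portion of
    each observation exceeding the threshold is accumulated."""
    return sum(v - threshold for v in series if v > threshold)

def fit_line(x, y):
    """Ordinary least squares for y = a + b*x, standing in for both the
    trend model (x = year) and the NDVI models (x = NDVI metric).
    Returns the intercept a and slope b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b
```

Fitting yield against `accumulated_ndvi` of each year's composite series, rather than against the year index, is what lets the late-season model respond to anomalies such as the 2012 drought profile shown in Figure 2.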
(This article belongs to the Special Issue Advances in Remote Sensing for Crop Monitoring and Yield Estimation)
Figure 1
<p>Study area. USA states in dark grey represent those that were also focused on for state-level yield assessment in addition to national-level. Crops shown are from the 2020 USDA NASS Cropland Data Layer.</p>
Figure 2
<p>Average NDVI signal over corn area of the USA showing years with highest (2020: yellow), lowest (2012: orange), earliest (2018: green), and latest (2019: blue) profiles.</p>
Figure 3
<p>R<sup>2</sup> optimization versus threshold in the accumulated NDVI methodology. The left side of the chart was bounded by 0.3 as that was the point at which NDVI typically reaches a minimum off season. The right end of each line represents the minimum NDVI maximum that occurred during the period.</p>
Figure 4
<p>Relationships of USA-level crop yield versus year, peak NDVI, and accumulated NDVI (crop by row: (<b>a</b>). corn, (<b>b</b>). soybeans, (<b>c</b>). spring wheat, (<b>d</b>). winter wheat, (<b>e</b>). cotton; model by column: <b>i</b>. year, <b>ii</b>. peak NDVI, <b>iii</b>. accumulated NDVI). The LSR line is in dotted red with the corresponding R<sup>2</sup>, SE, and CV values shown in <a href="#remotesensing-13-04227-t001" class="html-table">Table 1</a>.</p>
Figure 5
<p>MODIS NDVI estimated 2020 corn yields over the Central USA Corn Belt based on accumulated over 0.58 NDVI methodology (1 corn bu/ac = 0.0628 mt/ha).</p>
21 pages, 2853 KiB  
Article
Cereal Yield Forecasting with Satellite Drought-Based Indices, Weather Data and Regional Climate Indices Using Machine Learning in Morocco
by El houssaine Bouras, Lionel Jarlan, Salah Er-Raki, Riad Balaghi, Abdelhakim Amazirh, Bastien Richard and Saïd Khabba
Remote Sens. 2021, 13(16), 3101; https://doi.org/10.3390/rs13163101 - 6 Aug 2021
Cited by 62 | Viewed by 7975
Abstract
Accurate seasonal forecasting of cereal yields is an important decision support tool for countries, such as Morocco, that are not self-sufficient and need to predict importation needs as early as possible. This study aims to develop an early forecasting model of cereal yields (soft wheat, barley and durum wheat) at the scale of the agricultural province, considering the 15 most productive provinces over 2000–2017 (i.e., 15 × 18 = 270 yield values). To this end, we built on previous works that showed a tight linkage between cereal yields and various datasets including weather data (rainfall and air temperature), regional climate indices (the North Atlantic Oscillation in particular), and drought indices derived from satellite observations in different wavelengths. The combination of these three data sets is assessed for predicting cereal yields using linear (Multiple Linear Regression, MLR) and non-linear (Support Vector Machine, SVM; Random Forest, RF; and eXtreme Gradient Boost, XGBoost) machine learning algorithms. The calibration of the algorithmic parameters of the different approaches is carried out using a 5-fold cross-validation technique, and a leave-one-out method is implemented for model validation. The statistical metrics of the models are first analyzed as a function of the input datasets that are used, and as a function of the lead time, from 4 months to 2 months before harvest. The results show that combining data from multiple sources outperformed models based on one dataset only. In addition, the satellite drought indices are a major source of information for cereal prediction when the forecasting is carried out close to harvest (2 months before), while weather data and, to a lesser extent, climate indices are key variables for earlier predictions. The best models can accurately predict yield in January (4 months before harvest) with an R2 = 0.88 and RMSE around 0.22 t ha−1. The XGBoost method exhibited the best metrics. 
Finally, training a specific model separately for each group of provinces, instead of one global model, improved the prediction performance by reducing the RMSE by 10% to 35% depending on the provinces. In conclusion, the results of this study pointed out that combining remote sensing drought indices with climate and weather variables using a machine learning technique is a promising approach for cereal yield forecasting. Full article
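The leave-one-year-out hold-out described above can be sketched for the linear (MLR) case; the loop is identical for the non-linear learners. All names are illustrative, and the predictor matrix `X` stands in for the weather, climate-index and drought-index features:

```python
import numpy as np

def leave_one_year_out_mlr(X, y, years):
    """Leave-one-year-out validation of a multiple linear regression yield model.
    For each year, fit OLS (with intercept) on the remaining years and predict
    the held-out samples; returns the out-of-sample predictions and RMSE."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    years = np.asarray(years)
    preds = np.empty_like(y)
    for yr in np.unique(years):
        test = years == yr
        train = ~test
        A_train = np.c_[X[train], np.ones(train.sum())]   # append intercept column
        coef, *_ = np.linalg.lstsq(A_train, y[train], rcond=None)
        preds[test] = np.c_[X[test], np.ones(test.sum())] @ coef
    rmse = np.sqrt(np.mean((preds - y) ** 2))
    return preds, rmse
```

Grouping the hold-out by year (rather than by random fold) keeps all provinces of one season out of training together, which mirrors the operational forecasting situation.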
Graphical abstract
Figure 1
<p>The study areas with the 15 provinces.</p>
Figure 2
<p>Schematic diagram presenting an overview of the main input data and the methodology proposed in this study.</p>
Figure 3
<p>Model performance (R<sup>2</sup> -line- and RMSE -bar-) as a function of the lead time from 4 to 2 months before harvest (from January to March) for the four methods (MLR, SVM, RF and XGBoost). All the available predictor variables at the time of prediction were used (see <a href="#remotesensing-13-03101-t002" class="html-table">Table 2</a>).</p>
Figure 4
<p>Importance of the different input datasets for yield prediction from January (<b>a</b>), February (<b>b</b>) and March (<b>c</b>). The considered predictor variables and their time span of the year for each model are reported in <a href="#remotesensing-13-03101-t002" class="html-table">Table 2</a>.</p>
Figure 5
<p>Average of observed and predicted yields using the “leave-one-year-out” technique at (<b>a</b>) January, (<b>b</b>) February and (<b>c</b>) March.</p>
Figure 6
<p>RMSE of global model at (<b>a</b>) January, (<b>b</b>) February, (<b>c</b>) March and regional models at (<b>e</b>) January, (<b>f</b>) February and (<b>g</b>) March.</p>
20 pages, 10302 KiB  
Article
Combining Spectral and Texture Features of UAV Images for the Remote Estimation of Rice LAI throughout the Entire Growing Season
by Kaili Yang, Yan Gong, Shenghui Fang, Bo Duan, Ningge Yuan, Yi Peng, Xianting Wu and Renshan Zhu
Remote Sens. 2021, 13(15), 3001; https://doi.org/10.3390/rs13153001 - 30 Jul 2021
Cited by 47 | Viewed by 4860
Abstract
Leaf area index (LAI) estimation is important not only for canopy structure analysis but also for yield prediction. The unmanned aerial vehicle (UAV) serves as a promising solution for LAI estimation due to its great applicability and flexibility. At present, the vegetation index (VI) is still the most widely used method in LAI estimation because of its fast speed and simple calculation. However, VI only reflects the spectral information and ignores the texture information of images, so it is difficult to adapt to the unique and complex morphological changes of rice in different growth stages. In this study, we put forward a novel method that combines the texture information derived from the local binary pattern and variance features (LBP and VAR) with the spectral information based on VI to improve the estimation accuracy of rice LAI throughout the entire growing season. The multitemporal images of two study areas located in Hainan and Hubei were acquired by a 12-band camera, and the main typical bands for constituting VIs, such as green, red, red edge, and near-infrared, were selected to analyze their changes in spectrum and texture during the entire growing season. After the mathematical combination of plot-level spectrum and texture values, new indices were constructed to estimate rice LAI. Compared with the corresponding VIs, the new indices were all less sensitive to the appearance of panicles and slightly weakened the saturation issue. The coefficient of determination (R2) was improved for all tested VIs throughout the entire growing season. The results showed that the combination of spectral and texture features exhibited a better predictive ability than VI alone for estimating rice LAI. This method only utilizes the texture and spectral information of the UAV image itself; it is fast, easy to operate, does not need manual intervention, and can be a low-cost method for monitoring crop growth. Full article
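The texture half of the proposed indices rests on the rotation-invariant uniform LBP and the local variance of each band. A compact NumPy sketch of both per-pixel measures for an 8-neighbourhood (illustrative only; the paper’s exact windowing and the subsequent −ln(LBP × VAR) combination with reflectance are not reproduced here):

```python
import numpy as np

def lbp_riu2_var(img):
    """Per-pixel rotation-invariant uniform LBP (8 neighbours, radius 1) and
    local neighbourhood variance on the interior of a 2-D reflectance array."""
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]     # circular neighbour order
    h, w = img.shape
    c = img[1:-1, 1:-1]                            # centre pixels
    neigh = np.stack([img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
                      for dy, dx in offs])         # (8, h-2, w-2)
    bits = (neigh >= c).astype(int)
    # count 0/1 transitions around the circle; "uniform" patterns have <= 2
    transitions = np.sum(np.abs(bits - np.roll(bits, 1, axis=0)), axis=0)
    lbp = np.where(transitions <= 2, bits.sum(axis=0), 9)  # riu2 code, 9 = non-uniform
    var = neigh.var(axis=0)                        # rotation-invariant contrast
    return lbp, var
```

The joint distribution of the riu2 code and the variance captures both pattern and contrast, which is why their product is a useful complement to purely spectral VIs.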
Graphical abstract
Figure 1
<p>Study Area. (<b>a</b>) is the study area of Experiment 1 conducted in Hainan in 2018. (<b>b</b>) is the study area of Experiment 2 conducted in Hubei in 2019.</p>
Figure 2
<p>Reflectance variation of different objects extracted on the reflectance image. (<b>a</b>) is the reflectance of leaf and soil before heading. (<b>b</b>) is the reflectance of leaf and panicle after heading.</p>
Figure 3
<p>LBP patterns representing local features of spot, line, edge, and flat.</p>
Figure 4
<p>Image processing for calculating LV-VIs. R represents reflectance.</p>
Figure 5
<p>The variation of LAI plotted against vegetation indices: (<b>a</b>) CI<sub>green</sub>, (<b>b</b>) RVI, (<b>c</b>) CI<sub>red edge</sub>, (<b>d</b>) MTCI, (<b>e</b>) GNDVI, (<b>f</b>) NDVI, (<b>g</b>) NDRE and (<b>h</b>) EVI2 during the entire growing season. Pre-HD and Post-HD denote pre-heading stages and post-heading stages, respectively. For all tested indices, the samples after heading deviated from the pre-heading LAI vs. VI relationship.</p>
Figure 6
<p>Rice LAI plotted against the reflectance of (<b>a</b>) 550 nm, (<b>b</b>) 670 nm, (<b>c</b>) 720 nm, (<b>d</b>) 800 nm, the LBP<sub>riu2</sub> × VAR calculated based on the bands of (<b>e</b>) 550 nm, (<b>f</b>) 670 nm, (<b>g</b>) 720 nm, (<b>h</b>) 800 nm, and the LV-R of (<b>i</b>) 550 nm, (<b>j</b>) 670 nm, (<b>k</b>) 720 nm, (<b>l</b>) 800 nm during the entire growth season. When the horizontal axis of LBP<sub>riu2</sub> × VAR was expressed in exponential form, it was the opposite of the trend of LAI vs. reflectance. The LV-R index combining R and LBP<sub>riu2</sub> × VAR had a smaller separation than R between pre-heading stages (Pre-HD) and post-heading stages (Post-HD).</p>
Figure 7
<p>Two models of VI and LV-VI were tested for rice LAI estimation throughout the entire growing season on (<b>a</b>) CI<sub>red edge</sub>, (<b>b</b>) GNDVI, and (<b>c</b>) MTCI.</p>
Figure 8
<p>The coefficients of variation (CV) for LAI estimation using VI and LV-VI, respectively.</p>
Figure 9
<p>The relationship between Estimated LAI and Measured LAI based on (<b>a</b>) LV-CI<sub>green</sub>, (<b>b</b>) LV-EVI2, and (<b>c</b>) LV-NDRE.</p>
Figure 10
<p>Image reflectance and texture changes of two plots at (<b>a</b>) 550 nm, (<b>b</b>) 670 nm, (<b>c</b>) 720 nm, and (<b>d</b>) 800 nm on the 23rd, 47th, 63rd, and 85th days after transplanting (DAT). R represents reflectance image. LBP<sub>riu2</sub>, VAR, LBP<sub>riu2</sub> × VAR, −ln(LBP<sub>riu2</sub> × VAR), and LV-R represent the corresponding images calculated by these methods.</p>
Figure 11
<p>Temporal behaviors of (<b>a</b>) the reflectance ratio of visible and NIR and (<b>b</b>) the −ln(LBP<sub>riu2</sub> × VAR) ratio of visible and NIR during the entire rice growing season. −ln(LBP<sub>riu2</sub> × VAR) showed the opposite trend to R. The relatively high value of the logarithm index could narrow the difference of reflectance in the tillering stage, while the low value after heading could widen it instead, thus reducing the saturation and hysteresis effects.</p>
22 pages, 5111 KiB  
Article
Mapping Maize Area in Heterogeneous Agricultural Landscape with Multi-Temporal Sentinel-1 and Sentinel-2 Images Based on Random Forest
by Yansi Chen, Jinliang Hou, Chunlin Huang, Ying Zhang and Xianghua Li
Remote Sens. 2021, 13(15), 2988; https://doi.org/10.3390/rs13152988 - 29 Jul 2021
Cited by 30 | Viewed by 4780
Abstract
Accurate estimation of crop area is essential for adjusting the regional crop planting structure and the rational planning of water resources. However, it is quite challenging to map crops accurately with high-resolution remote sensing images because of the ecological gradient and ecological convergence between crops and non-crops. The purpose of this study is to explore the combined application of high-resolution multi-temporal Sentinel-1 (S1) radar backscatter and Sentinel-2 (S2) optical reflectance images for maize mapping in the highly complex and heterogeneous landscapes in the middle reaches of Heihe River, northwest China. We propose a new two-step method of vegetation extraction followed by maize extraction: first, extract the vegetation-covered areas to reduce the inter-class variance using a Random Forest (RF) classifier based on S2 data, and then extract the maize distribution within the vegetation area using another RF classifier based on S1 and/or S2 data. The results demonstrate that the vegetation extraction classifier successfully identified vegetation-covered regions with an overall accuracy above 96% in the study area, and that the accuracy of the maize extraction classifier constructed from the combined multi-temporal S1 and S2 images is significantly improved compared with that of S1 (alone) or S2 (alone), with an overall accuracy of 87.63%, F1_Score of 0.86, and Kappa coefficient of 0.75. In addition, the introduction of multi-temporal S1 and/or S2 images across the crop growing season makes the constructed RF model more beneficial to maize mapping. Full article
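The two-step cascade itself is simple to express: a vegetation mask derived from S2 features gates a second maize classifier run on the combined features. A schematic sketch with hypothetical classifier callables standing in for the two trained RF models:

```python
import numpy as np

def two_step_maize_map(s2_features, s1s2_features, veg_clf, maize_clf):
    """Cascaded classification: first mask vegetation from S2-derived features,
    then classify maize only within that mask from combined S1/S2 features.
    veg_clf and maize_clf are any callables returning a boolean per pixel."""
    s2_features = np.asarray(s2_features)
    s1s2_features = np.asarray(s1s2_features)
    veg_mask = np.asarray(veg_clf(s2_features), dtype=bool)
    maize = np.zeros(veg_mask.shape, dtype=bool)
    # only vegetated pixels reach the second-stage classifier
    maize[veg_mask] = maize_clf(s1s2_features[veg_mask])
    return veg_mask, maize
```

Restricting the second classifier to vegetated pixels is what reduces the inter-class variance it has to cope with; in the paper both stages are Random Forests, which could be dropped in here as the two callables.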
Figure 1
<p>Study area of the middle reaches of Heihe River showing (<b>a</b>) location of the study area in China, (<b>b</b>) location of the study area in Heihe River basin, (<b>c</b>) Sentinel-2 composite image in August of false-color composite.</p>
Figure 2
<p>Technical flow of the maize planting area extraction scheme. “k = k − 5” represents the elimination of the 5 least important features in each iteration. It includes three parts: (1) data preparation and preprocessing; (2) extraction of the vegetation area using S2 images; (3) extraction of the maize planting area using S1 (alone), S2 (alone) and the combined dataset, respectively.</p>
Figure 3
<p>The vegetation extraction result in the study area. “Yellow” represents the non-vegetation area, “purple” represents the vegetation area, and then the maize planting area will be further extracted in the vegetation area.</p>
Figure 4
<p>The importance of 30 input indicators in vegetation extraction. Note that the values of importance of variables have been normalized. The change from blue to red indicates the increasing trend of the importance of variables.</p>
Figure 5
<p>The feature indicators used to construct three different RF maize extraction models: RF_S1, RF_S2, and RF_S1&amp;S2, respectively. (<b>a</b>) the 24 feature indicators from S1 (alone); (<b>b</b>) the 30 optimal feature indicators from S2 (alone); (<b>c</b>) the 40 optimal feature indicators from the combined S1 and S2 data.</p>
Figure 6
<p>(<b>a</b>) Distribution map of maize planting area in the study area; (<b>b</b>) The false color composite map of a local enlarged typical desert-oasis zone; and (<b>c</b>) maize distribution map of the typical zone.</p>
Figure 7
<p>The number of good observations in the study area from April to September. Banded areas have higher values because more S2 images overlap there. The color in the graph represents the amount of data available for each pixel during the time interval; from “blue” to “red”, the number of good observations increases gradually.</p>
Figure 8
<p>Performance comparison of different RF models constructed by incremental learning and feature selection. “noFSP” means that the feature selection procedure is not used. In the figure, the month marked on the abscissa means that the input features of the RF model include the data of that month and previous months.</p>
23 pages, 10099 KiB  
Article
Quantifying Effects of Excess Water Stress at Early Soybean Growth Stages Using Unmanned Aerial Systems
by Stuart D. Smith, Laura C. Bowling, Katy M. Rainey and Keith A. Cherkauer
Remote Sens. 2021, 13(15), 2911; https://doi.org/10.3390/rs13152911 - 24 Jul 2021
Cited by 6 | Viewed by 2820
Abstract
In-field flooding in low-gradient agricultural areas impacts crop development and yield potential, resulting in financial losses. Early identification of the potential reduction in yield from excess water stress at the plot scale provides stakeholders with the high-throughput information needed to assess risk and make responsive economic management decisions as well as future investments. The objective of this study is to analyze and evaluate the application of proximal remote sensing from unmanned aerial systems (UAS) to detect excess water stress in soybean and predict the potential reduction in yield due to this stress. A high-throughput data processing pipeline is developed to analyze multispectral images captured at the early development stages (R4–R5) from a low-cost UAS over two radiation use efficiency experiments in West–Central Indiana, USA. Above-ground biomass is estimated remotely to assess soybean development by considering soybean genotype classes (High Yielding, High Yielding under Drought, Diversity, all classes) and transferring estimated parameters to a replicate experiment. Digital terrain analysis using the Topographic Wetness Index (TWI) is used to objectively compare plots more susceptible to inundation with replicate plots less susceptible to inundation. The results of the study indicate that proximal remote sensing estimates above-ground biomass at the R4–R5 stage using adaptable and transferable methods, with a calculated percent bias between 0.8% and 14% and root mean square error between 72 g/m2 and 77 g/m2 across all genetic classes. The estimated biomass is sensitive to excess water stress, with distinguishable differences identified between the R4 and R5 development stages; this translates into a reduction in the percent of expected yield corresponding with observations of in-field flooding and high TWI. 
This study demonstrates transferable methods to estimate yield loss due to excess water stress at the plot level and increased potential to provide crop status assessments to stakeholders prior to harvest using low-cost UAS and a high-throughput data processing pipeline. Full article
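The TWI used to rank plot susceptibility is conventionally ln(a / tan β), with specific catchment area a and local slope β. A minimal sketch from precomputed flow-accumulation and slope rasters (the 1.5 m cell size follows the DEM resolution in the figure caption; the slope floor and the +1 on accumulation are assumptions added to avoid singularities, not details from the paper):

```python
import numpy as np

def topographic_wetness_index(flow_acc, slope_deg, cell_size=1.5):
    """TWI = ln(a / tan(beta)). flow_acc is the D8 flow-accumulation count per
    cell, slope_deg the local slope in degrees; a is the specific catchment
    area per unit contour width. Flat cells are floored at 0.1 degrees."""
    flow_acc = np.asarray(flow_acc, dtype=float)
    a = (flow_acc + 1.0) * cell_size           # include the cell's own area
    beta = np.deg2rad(np.maximum(np.asarray(slope_deg, dtype=float), 0.1))
    return np.log(a / np.tan(beta))
```

High-TWI cells (large contributing area, low slope) are those most likely to show the mapped inundated land area, which is how the index is used to pair flood-prone and control plots.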
Figure 1
<p>Image of inundated land area during different development stages of soybean in an agricultural field in West–Central Indiana: (<b>a</b>) extent and shallow depth of ILA after planting, (<b>b</b>) vegetative stage where the impacts of ILA prevented some plants from developing leaves, and (<b>c</b>) reproductive stage where the impacts of ILA have caused lodging in some plants.</p>
Figure 2
<p>Map view of the field experiment, located in West–Central Indiana. Two areas of interest were analyzed, outlined in black and blue: the black outline represents the area of experiment 1, RUE-1, and the blue outline the second experiment, RUE-2. The extent of inundation was mapped using TOPCON Real Time Kinematic (RTK) surveying equipment. Ground control points were used to define the extent of the experiments.</p>
Figure 3
<p>Illustration of the components of a defined experiment from Crop Image Extraction (CIE) and Vegetation Indices Derivation (VID). The user has defined an experiment made up of four crop rows and two crop ranges, and each crop plot contains six crop units. CIE extracts replicate plot images from the UAS during a flight over an area of interest: it enables the user to define an experiment, then highlights the canopy, grids the experiment, and extracts the replicate plot images from each gridded plot. VID is used to calibrate images and compute vegetation indices of interest.</p>
Figure 4
<p>Map of Topographic Wetness Index (TWI) calculated from a 1.5 m resolution DEM at the study location. Lower values shown in brown are less susceptible to in-field flooding; the transition from brown to blue shows an increase in susceptibility to in-field flooding. Mapped inundated land area (ILA) shows agreement with the calculated TWI. Crop Image Extraction (CIE) can be used to extract TWI from plots within a defined experiment. The black circles represent the extracted plots from RUE-1 and RUE-2.</p>
Figure 5
<p>Scatter plot of the relationship between Topographic Wetness Index (TWI) and soybean yield (kg/ha) labeled by class. TWI thresholds at 7.4 and 13.5 were set to compare RUE-1 and RUE-2 replicates less likely to experience ILA to those that were more likely to experience ILA. The low and high TWI ranges used in analysis are represented by arrows.</p>
Figure 6
<p>Scatter plots comparing estimated biomass with measured biomass at the R4–R5 development stage for each class and all classes in RUE-1 shown as triangles. (<b>a</b>) High Yielding, (<b>b</b>) High Yielding under Drought, (<b>c</b>) Diversity and (<b>d</b>) all classes. Parameters were estimated for each class to consider varying soybean genetics and for all classes to determine if one set of parameters could be representative for all classes.</p>
Figure 7
<p>Scatter plots comparing estimated biomass with measured biomass at the R4–R5 development stage for each class and classes in RUE-2 shown as circles. (<b>a</b>) High Yielding, (<b>b</b>) High Yielding under Drought, (<b>c</b>) Diversity and (<b>d</b>) all classes. Parameters were transferred from RUE-1 to RUE-2.</p>
Figure 8
<p>Estimated biomass (g/m<sup>2</sup>) of soybean on 17 July 2018 at early growth stages (R4–R5) for experiments in RUE-1 and RUE-2. Outputs generated using CIE and VID. Plots of low estimated biomass values are shown in red and the transition to green represents an increase in estimated biomass. Plots with low estimated biomass correspond with mapped inundated land area.</p>
Figure 9
<p>Four pairs of replicate plots in RUE-1 and RUE-2 analyzing the effects of excess water stress on the percent difference in yield with respect to the measured and estimated biomass (g/m<sup>2</sup>) at the R4–R5 stage. Increasing biomass estimate for RUE-1 is plotted to the left, for RUE-2 to the right, with zero biomass in the center of the plot. Decline in yield, reported as a percent difference between RUE-2 and RUE-1, is indicated by vertical position of the bar relative to the RUE-1 replicate. Black circles indicate each observed occurrence of inundated land area for the plot. The biomass was sampled on 16 July 2018 and remotely sensed estimates of biomass were made on 17 July.</p>
Figure 10
<p>Analyzing the interactions between estimated biomass, TWI and ∆ TWI at the early growth stage (R4–R5). Values are colored with the difference in TWI between replicate plots, which ranges from 7 to 12.</p>
Figure 11
<p>Comparing interactions between percent of expected yield (%) with respect to relative biomass (fraction) and the ∆ estimated biomass (g/m<sup>2</sup>) at the early growth stage (R4–R5).</p>
Figure 12
<p>Percent of expected yield (%) from 17 July 2018 at the R4–R5 stage. Values below 90% (red to orange) indicate areas predicted more likely to have a lower yield, whereas values greater than 95% (yellow to green) predict a higher estimate of yield.</p>
17 pages, 4357 KiB  
Article
Oilseed Rape (Brassica napus L.) Phenology Estimation by Averaged Stokes-Related Parameters
by Wangfei Zhang, Yongxin Zhang, Yue Yang and Erxue Chen
Remote Sens. 2021, 13(14), 2652; https://doi.org/10.3390/rs13142652 - 6 Jul 2021
Cited by 7 | Viewed by 2893
Abstract
Accurate and timely knowledge of crop phenology assists in planning and/or triggering appropriate farming activities. The multiple Polarimetric Synthetic Aperture Radar (PolSAR) technique shows great potential in crop phenology retrieval owing to characteristics such as short revisit time, all-weather monitoring and sensitivity to vegetation structure. This study aims to explore the potential of averaged Stokes-related parameters derived from multiple PolSAR data in oilseed rape phenology identification. In this study, the averaged Stokes-related parameters were first computed for two different wave polarimetric states. Then, the two groups of averaged Stokes-related parameters were generated and applied to analyze their sensitivity to oilseed rape phenology changes. At last, decision tree (DT) algorithms trained using 60% of the data were used for oilseed rape phenological stage classification. Four Stokes parameters (g0, g1, g2 and g3) and eight sub-parameters (degree of polarization m, entropy H, ellipticity angle χ, orientation angle φ, degree of linear polarization Dolp, degree of circular polarization Docp, linear polarization ratio Lpr and circular polarization ratio Cpr) were extracted from a multi-temporal RADARSAT-2 dataset acquired during the whole oilseed rape growth cycle in 2013. Their sensitivities to oilseed rape phenology were analyzed against five main rape phenology stages. In the two groups (two different wave polarimetric states) of this study, g0, g1, g2, g3, m, H, Dolp and Lpr showed high sensitivity to oilseed rape growth stages, while χ, φ, Docp and Cpr, which showed good performance for phenology classification in previous studies, were quite noisy during the whole oilseed rape growth cycle and showed no obvious sensitivity to the crop’s phenology change. The DT algorithms performed well in oilseed rape phenological stage identification. 
The results were verified at the parcel level with the remaining 40% of the point dataset. Five phenology intervals of oilseed rape were identified with no more than three parameters by simple but robust decision tree algorithm groups. The identified phenology stages agree well with the ground measurements; the overall identification accuracies were 71.18% and 79.71%, respectively. For each growth stage, the best performance occurred at stage S1, with an accuracy of 95.65% for Group 1 and 94.23% for Group 2, and the worst performance occurred at stages S3 and S5, with values around 60%. Most of the classification errors may have resulted from the indistinguishability of S3 and S5 using Stokes-related parameters. Full article
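The averaged Stokes vector and several of the sub-parameters follow directly from the complex field components of a given wave polarimetric state. A sketch under one common sign convention (conventions for g3, χ and φ differ between references, so this is illustrative rather than the authors' exact formulation):

```python
import numpy as np

def stokes_parameters(Eh, Ev):
    """Averaged Stokes vector (g0..g3) and derived sub-parameters from arrays
    of complex horizontal/vertical field samples. One common sign convention."""
    Eh = np.asarray(Eh, dtype=complex)
    Ev = np.asarray(Ev, dtype=complex)
    g0 = np.mean(np.abs(Eh) ** 2 + np.abs(Ev) ** 2)        # total intensity
    g1 = np.mean(np.abs(Eh) ** 2 - np.abs(Ev) ** 2)
    g2 = np.mean(2.0 * np.real(Eh * np.conj(Ev)))
    g3 = np.mean(-2.0 * np.imag(Eh * np.conj(Ev)))
    m = np.sqrt(g1 ** 2 + g2 ** 2 + g3 ** 2) / g0          # degree of polarization
    dolp = np.sqrt(g1 ** 2 + g2 ** 2) / g0                 # degree of linear polarization
    docp = abs(g3) / g0                                    # degree of circular polarization
    phi = 0.5 * np.arctan2(g2, g1)                         # orientation angle
    chi = 0.5 * np.arcsin(np.clip(g3 / (m * g0 + 1e-12), -1.0, 1.0))  # ellipticity angle
    return dict(g0=g0, g1=g1, g2=g2, g3=g3, m=m,
                dolp=dolp, docp=docp, phi=phi, chi=chi)
```

Averaging the Stokes components over many samples before deriving m, Dolp, etc. is what makes these "averaged" parameters; a single sample is always fully polarized (m = 1).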
Graphical abstract
Figure 1
<p>The location of the study site and the distribution of oilseed rape parcels within it.</p>
Figure 2
<p>Oilseed rape phenological stages, their corresponding BBCH-codes, DAS, plant images taken from Weber and Bleiholder (1990) and field photos taken during field campaign.</p>
Figure 3
<p>Flowchart of oilseed rape phenology retrieval from RADARSAT-2 multi-temporal data and DT algorithm.</p>
Figure 4
<p>Evolution of the Group1 Stokes parameters versus oilseed rape phenology.</p>
Figure 5
<p>Evolution of the Group2 Stokes parameters versus oilseed rape phenology.</p>
Figure 6
<p>The evolution of <math display="inline"><semantics> <mi>m</mi> </semantics></math>, <math display="inline"><semantics> <mi>H</mi> </semantics></math>, <math display="inline"><semantics> <mi>χ</mi> </semantics></math> and <math display="inline"><semantics> <mi>φ</mi> </semantics></math> in Group 1 versus rapeseed phenology.</p>
Figure 7
<p>The evolution of <math display="inline"><semantics> <mi>m</mi> </semantics></math>, <math display="inline"><semantics> <mi>H</mi> </semantics></math>, <math display="inline"><semantics> <mi>χ</mi> </semantics></math> and <math display="inline"><semantics> <mi>φ</mi> </semantics></math> in Group 2 versus rapeseed phenology.</p>
Figure 8
<p>The Group 1 <math display="inline"><semantics> <mrow> <mi>D</mi> <mi>o</mi> <mi>l</mi> <mi>p</mi> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>D</mi> <mi>o</mi> <mi>c</mi> <mi>p</mi> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>L</mi> <mi>p</mi> <mi>r</mi> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>C</mi> <mi>p</mi> <mi>r</mi> </mrow> </semantics></math> on the evolution of rapeseed phenology.</p>
Figure 9
<p>The Group 2 <math display="inline"><semantics> <mrow> <mi>D</mi> <mi>o</mi> <mi>l</mi> <mi>p</mi> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>D</mi> <mi>o</mi> <mi>c</mi> <mi>p</mi> </mrow> </semantics></math>, <math display="inline"><semantics> <mrow> <mi>L</mi> <mi>p</mi> <mi>r</mi> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>C</mi> <mi>p</mi> <mi>r</mi> </mrow> </semantics></math> on the evolution of rapeseed phenology.</p>
25 pages, 24220 KiB  
Article
UAV Remote Sensing Estimation of Rice Yield Based on Adaptive Spectral Endmembers and Bilinear Mixing Model
by Ningge Yuan, Yan Gong, Shenghui Fang, Yating Liu, Bo Duan, Kaili Yang, Xianting Wu and Renshan Zhu
Remote Sens. 2021, 13(11), 2190; https://doi.org/10.3390/rs13112190 - 4 Jun 2021
Cited by 25 | Viewed by 5312
Abstract
The accurate estimation of rice yield using remote sensing (RS) technology is crucially important for agricultural decision-making. The rice yield estimation model based on the vegetation index (VI) is commonly used in RS methods; however, it is affected by irrelevant organs and background, especially at heading stage. Spectral mixture analysis (SMA) can quantitatively obtain the abundance information and mitigate these impacts. Furthermore, according to the spectral variability and information complexity caused by the rice cropping system and the canopy characteristics of reflection and scattering, in this study, multi-endmember extraction by the pure pixel index (PPI) and the nonlinear unmixing method based on the bandwise generalized bilinear mixing model (NU-BGBM) were applied for SMA, and the VIE (VIs recalculated from endmember spectra) was integrated with abundance data to establish the yield estimation model at heading stage. In two paddy fields with different cultivation settings, multispectral images were collected by an unmanned aerial vehicle (UAV) at booting and heading stages. The correlation of several widely used VIs with rice yield was tested and found to be weaker at heading stage. In order to improve the yield estimation accuracy of rice at heading stage, the VIE and foreground abundances from SMA were combined to develop a linear yield estimation model. The results showed that VIE incorporated with abundances exhibited a better estimation ability than VI alone or the product of VI and abundances. In addition, when the structural difference of plants was obvious, adding the product of VIF (VIs recalculated from bilinear endmember spectra) and the corresponding bilinear abundances to the original product of VIE and abundances enhanced model reliability. VIs using the near-infrared bands improved more significantly, with estimation errors below 8.1%. 
This study verified the validity of the targeted SMA strategy for estimating crop yield from remotely sensed VIs, especially for objects with obviously different spectra and complex structures.
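As a rough illustration of the SMA-plus-VI idea above (not the paper's multi-endmember PPI extraction or the nonlinear NU-BGBM), a two-endmember linear unmixing with a sum-to-one constraint, followed by an NDVI recalculated from the foreground endmember spectrum and weighted by its abundance, can be sketched in numpy; the four-band endmember spectra below are invented:

```python
import numpy as np

# Invented endmember spectra for four bands (B, G, R, NIR):
# column 0 = foreground (rice canopy), column 1 = background (water/soil).
E = np.array([[0.04, 0.08, 0.05, 0.45],
              [0.10, 0.12, 0.14, 0.20]]).T
pixel = 0.7 * E[:, 0] + 0.3 * E[:, 1]  # synthetic noise-free mixed pixel

# Linear SMA with a sum-to-one abundance constraint, solved by
# augmenting the least-squares system with an extra row of ones.
A = np.vstack([E, np.ones((1, 2))])
b = np.append(pixel, 1.0)
abund, *_ = np.linalg.lstsq(A, b, rcond=None)

# NDVI recalculated from the foreground endmember spectrum (a "VI_E"),
# then combined with the foreground abundance as a yield-model predictor.
red, nir = E[2, 0], E[3, 0]
vi_e = (nir - red) / (nir + red)
predictor = vi_e * abund[0]
```

Here the recovered abundances are exactly (0.7, 0.3) because the synthetic pixel is a noise-free linear mixture; real pixels add noise and, in the paper's model, bilinear interaction terms.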
(This article belongs to the Special Issue Advances in Remote Sensing for Crop Monitoring and Yield Estimation)
Figure 1
<p>The field images of different rice varieties at heading stage: (<b>a</b>) Longyou-518; (<b>b</b>) Tianlong-660; (<b>c</b>) Liangyou-1128; and (<b>d</b>) unnamed cultivar.</p>
Figure 2
<p>The location of the study site and the regions of interest in each plot. (<b>a</b>,<b>b</b>) Study Area 1 Lingshui and (<b>c</b>,<b>d</b>) Study Area 2 Wuxue.</p>
Figure 3
<p>The illustration of (<b>a</b>) mini-MCA 12, (<b>b</b>) mini-MCA 6, and (<b>c</b>) UAV.</p>
Figure 4
<p>Schematic representation of the multilayer characteristics of paddy rice spectra.</p>
Figure 5
<p>The procedure of PPI: (<b>a</b>) Masking; (<b>b</b>) setting threshold; (<b>c</b>) enumerating pixels in n-Dimensional Visualizer; and (<b>d</b>) outputting endmember spectra.</p>
Figure 6
<p>The abundance images of (<b>a</b>) foreground and (<b>b</b>) background.</p>
Figure 7
<p>The sketch images of the six kinds of delineating sample strategies in the n-Dimensional Visualizer. (<b>a</b>) One foreground and one background; (<b>b</b>) two foregrounds and one background; (<b>c</b>) three foregrounds and one background; (<b>d</b>) four foregrounds and one background; (<b>e</b>) five foregrounds and one background; and (<b>f</b>) six foregrounds and one background.</p>
Figure 8
<p>The output endmember spectra of the six strategies. (<b>a</b>) One foreground and one background; (<b>b</b>) two foregrounds and one background; (<b>c</b>) three foregrounds and one background; (<b>d</b>) four foregrounds and one background; (<b>e</b>) five foregrounds and one background; and (<b>f</b>) six foregrounds and one background.</p>
Figure 9
<p>The abundance images of (<b>a</b>–<b>e</b>) five foreground and (<b>f</b>) one background endmembers of Study Area 1.</p>
Figure 10
<p>The abundance images of (<b>a</b>–<b>d</b>) four foreground and (<b>e</b>) one background endmembers of Study Area 2.</p>
Figure 11
<p>The R<sup>2</sup> and RMSE of yield with VI, VI × A, VI<sub>E</sub> × A, and (VI<sub>E</sub> × A + VI<sub>F</sub> × B) of (<b>a</b>) Study Area 1 Lingshui and (<b>b</b>) Study Area 2 Wuxue at heading stage.</p>
Figure 12
<p>The linear regression results of optimal indices. (<b>a</b>) Estimated yield vs. observed yield in Study Area 1 Lingshui, and (<b>b</b>) estimated yield vs. observed yield in Study Area 2 Wuxue.</p>
Figure 13
<p>The linear regression results of yield and the four products of VARI in Study Area 2 Wuxue.</p>
19 pages, 5064 KiB  
Article
Predicting Table Beet Root Yield with Multispectral UAS Imagery
by Robert Chancia, Jan van Aardt, Sarah Pethybridge, Daniel Cross and John Henderson
Remote Sens. 2021, 13(11), 2180; https://doi.org/10.3390/rs13112180 - 2 Jun 2021
Cited by 6 | Viewed by 3131
Abstract
Timely and accurate monitoring has the potential to streamline crop management, harvest planning, and processing in the growing table beet industry of New York state. We used an unmanned aerial system (UAS) combined with a multispectral imager to monitor table beet (Beta vulgaris ssp. vulgaris) canopies in New York during the 2018 and 2019 growing seasons. We assessed the optimal pairing of a reflectance band or vegetation index with canopy area to predict table beet yield components of small sample plots using leave-one-out cross-validation. The most promising models were for table beet root count and mass using imagery taken during emergence and canopy closure, respectively. We created augmented plots, composed of random combinations of the study plots, to further exploit the importance of early canopy growth area. We achieved an R2 = 0.70 and a root mean squared error (RMSE) of 84 roots (~24%) for root count using 2018 emergence imagery. The same model resulted in an RMSE of 127 roots (~35%) when tested on the unseen 2019 data. Harvested root mass was best modeled with canopy closing imagery, with an R2 = 0.89 and RMSE = 6700 kg/ha using 2018 data. We applied the model to the 2019 full-field imagery and found an average yield of 41,000 kg/ha (~40,000 kg/ha average for upstate New York). This study demonstrates the potential for table beet yield models using a combination of radiometric and canopy structure data obtained at early growth stages. Additional imagery of these early growth stages is vital to develop a robust and generalized model of table beet root yield that can handle imagery captured at slightly different growth stages between seasons.
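The model-selection machinery described above (a multiple linear regression of one band or index plus canopy area, scored by leave-one-out cross-validation) can be sketched with numpy on synthetic stand-in data; the coefficients and plot values below are invented, not the study's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in plots: one vegetation index (e.g., DVI) and a canopy
# area per plot, with root count linearly related to both plus noise.
n = 30
dvi = rng.uniform(0.1, 0.6, n)
area = rng.uniform(0.5, 1.8, n)
roots = 200 * dvi + 150 * area + rng.normal(0, 10, n)

X = np.column_stack([np.ones(n), dvi, area])  # intercept + two predictors

# Leave-one-out cross-validation: refit on n-1 plots, predict the held-out one.
preds = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i
    beta, *_ = np.linalg.lstsq(X[mask], roots[mask], rcond=None)
    preds[i] = X[i] @ beta

rmse = float(np.sqrt(np.mean((preds - roots) ** 2)))
r2 = 1 - np.sum((roots - preds) ** 2) / np.sum((roots - roots.mean()) ** 2)
```

Ranking each candidate band/index by this cross-validated RMSE and R2 mirrors how the best-performing pairing (e.g., DVI + area for root count) would be chosen.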
(This article belongs to the Special Issue Advances in Remote Sensing for Crop Monitoring and Yield Estimation)
Graphical abstract
Figure 1
<p>The 2018 and 2019 table beet field site locations in Batavia, New York, USA. Lower left map data: Google, Maxar Technologies.</p>
Figure 2
<p>RGB (color) bands of MicaSense plot imagery from 9 July 2018 at 11:00. Note the broad range in crop emergence at this early season observation. Each plot is 3 × 0.6 m in dimension and has a 0.02 m ground sampling distance.</p>
Figure 3
<p>(<b>Left</b>) Median blue, green, red, red edge, and near-infrared reflectance of each plot’s canopy pixels (background soil masked out) from table beet plots in New York in 2018. The vertical dashed lines separate the data in terms of the growth epochs observed in the 2018 season. (<b>Right</b>) The average and standard deviation of all individual plot median reflectance measurements are shown alongside the range of the individual measurements (X’s).</p>
Figure 4
<p>Table beet vegetation classification for four observations of the 2018 plot # 17 using TVI &gt; 7.5 threshold, k-means (two clusters), vegetation-endmember (average of all plot vegetation spectra) SAM &gt; 0.85 threshold (normalized between 0 and 1, where 1 corresponds to a perfect match), and NDVI &gt; 0.5 threshold. The binary vegetation classification masks display vegetation pixels as white and the background soil and shadow as black.</p>
Figure 5
<p>Grayscale multispectral bands and vegetation indices for imagery collected in a table beet field on 16 July 2019 (example plot 16) in Batavia, New York.</p>
Figure 6
<p>The area-augmented table beet root count <math display="inline"><semantics> <mrow> <msup> <mi>R</mi> <mn>2</mn> </msup> </mrow> </semantics></math> (<b>left</b>) and <math display="inline"><semantics> <mrow> <mi>R</mi> <mi>M</mi> <mi>S</mi> <mi>E</mi> </mrow> </semantics></math> (<b>right</b>) leave-one-out cross-validation performance (y-axis) for individual multiple linear regression models. Each model consists of one reflectance band or vegetation index shown on the x-axis combined with canopy area. The DVI paired with table beet canopy area was the best-performing model with <math display="inline"><semantics> <mrow> <msup> <mi>R</mi> <mn>2</mn> </msup> <mo> </mo> <mo>=</mo> <mo> </mo> <mn>0.70</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>R</mi> <mi>M</mi> <mi>S</mi> <mi>E</mi> <mo> </mo> <mo>=</mo> <mo> </mo> <mn>84</mn> </mrow> </semantics></math> roots.</p>
Figure 7
<p>Measured vs. predicted table beet root count for the 2018 emergence DVI + Area multiple linear regression model, based on area-augmented imagery (<b>left</b>). The model is trained with 90% (red) and tested with the remaining 10% (green) area augmented study plot imagery. RMSE are displayed for the training data using leave-one-out cross-validation and for the unseen test data. The DVI + Area model is trained with 100% of the 2018 emergence imagery and tested to predict the root count (blue) for 100% of the 2019 area-augmented study plot imagery occurring closest to emergence (<b>right</b>).</p>
Figure 8
<p>The area-augmented table beet root mass <math display="inline"><semantics> <mrow> <msup> <mi>R</mi> <mn>2</mn> </msup> </mrow> </semantics></math> (<b>left</b>) and <math display="inline"><semantics> <mrow> <mi>R</mi> <mi>M</mi> <mi>S</mi> <mi>E</mi> </mrow> </semantics></math> (<b>right</b>) leave-one-out cross-validation performance (y-axis) for individual multiple linear regression models. Each model consists of one reflectance band or vegetation index shown on the x-axis combined with canopy area. The VDVI paired with canopy area was the best-performing model with <math display="inline"><semantics> <mrow> <msup> <mi>R</mi> <mn>2</mn> </msup> <mo> </mo> <mo>=</mo> <mo> </mo> <mn>0.89</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>R</mi> <mi>M</mi> <mi>S</mi> <mi>E</mi> <mo> </mo> <mo>=</mo> <mo> </mo> <mn>2.5</mn> </mrow> </semantics></math> kg. However, the vertical axis scaling shown for both plots indicates that the radiometric data provide no significant improvement to the table beet root mass model derived from the area-augmented imagery.</p>
Figure 9
<p>Measured vs. predicted table beet root mass for the 2018 canopy closing VDVI + Area multiple linear regression model based on area-augmented plot imagery. The model is trained with 90% (red) and tested with the remaining 10% (green) area-augmented study plot imagery. RMSE are displayed for the training data using leave-one-out cross-validation and for the unseen test data.</p>
Figure 10
<p>An evaluation of the area-augmented 2018 table beet root mass model on the independent augmented area plot imagery, taken on 16 July 2019. Since we do not have ground truth root mass data for the 2019 plots, we applied the model to the whole field for an approximate assessment. An RGB composite (<b>top</b>) of a large portion of the 2019 field serves as context for the 2 × 2 m grid of the field with the table beet root mass model.</p>
Figure 11
<p>The median percentage of all 2018 plots’ table beet roots falling in each diameter range is plotted in green inside a box spanning the interquartile range (IQR). Outliers are marked with black open circles and fall outside the error bars located at Quartile 3 + 1.5 IQR (above) and Quartile 1 − 1.5 IQR (below). The optimal percentages of table beet roots falling in each range, as provided by Love Beets USA (a commercial beet grower), are plotted with red filled circles.</p>
Figure 12
<p>There was limited radiometric variability for each New York table beet growth stage in 2018. The cumulative experimental plot variance is divided by the mean of each radiometric feature and canopy area. The emergence and canopy closing growth stages exhibited the most variable features amongst the full set of plots, reinforcing the modeling results.</p>
25 pages, 7006 KiB  
Article
Crop Yield Prediction Based on Agrometeorological Indexes and Remote Sensing Data
by Xiufang Zhu, Rui Guo, Tingting Liu and Kun Xu
Remote Sens. 2021, 13(10), 2016; https://doi.org/10.3390/rs13102016 - 20 May 2021
Cited by 17 | Viewed by 5394
Abstract
Timely and reliable estimations of crop yield are essential for crop management and successful food trade. In previous studies, remote sensing data or climate data have often been used alone in statistical yield estimation models. In this study, we synthetically used agrometeorological indicators and remote sensing vegetation parameters to estimate maize yield in Jilin and Liaoning Provinces of China. We applied two methods to select input variables, used the random forest method to establish yield estimation models, and verified the accuracy of the models in three disaster years (1997, 2000, and 2001). The results show that the R2 values of the eight yield estimation models established in the two provinces were all above 0.7, Lin’s concordance correlation coefficients were all above 0.84, and the mean absolute relative errors were all below 0.14. The mean absolute relative error of the yield estimations in the three disaster years was 0.12 in Jilin Province and 0.13 in Liaoning Province. A model built using variables selected by a two-stage importance evaluation method can achieve better accuracy with fewer variables. The final yield estimation model of Jilin Province adopts eight independent variables, and that of Liaoning Province adopts nine. Among the 11 variables adopted in the two provinces, ATT (accumulated temperature above 10 °C) variables accounted for the highest proportion (54.54%). In addition, the GPP (gross primary production) anomaly in August, the NDVI (Normalized Difference Vegetation Index) anomaly in August, and the standardized precipitation index with a two-month scale in July were selected as important modeling variables by all methods in the two provinces. This study provides a reference method for the selection of modeling variables, and the results are helpful for understanding the impact of climate on potential yield.
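The two accuracy metrics quoted above, Lin's concordance correlation coefficient and the mean absolute relative error, follow standard formulas; a small numpy illustration with invented yield values:

```python
import numpy as np

def lins_ccc(y_true, y_pred):
    """Lin's concordance correlation coefficient: 1.0 means perfect
    agreement with the 1:1 line, not just perfect correlation."""
    mt, mp = y_true.mean(), y_pred.mean()
    vt, vp = y_true.var(), y_pred.var()
    cov = np.mean((y_true - mt) * (y_pred - mp))
    return 2 * cov / (vt + vp + (mt - mp) ** 2)

def mare(y_true, y_pred):
    """Mean absolute relative error."""
    return float(np.mean(np.abs(y_pred - y_true) / y_true))

# Invented observed vs. estimated maize yields (t/ha) for five counties.
obs = np.array([5.1, 6.3, 4.8, 7.0, 5.9])
est = np.array([5.0, 6.5, 4.6, 6.8, 6.1])
```

For these invented values the concordance is about 0.98 and the mean absolute relative error about 0.03, i.e., within the thresholds (0.84 and 0.14) reported above.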
(This article belongs to the Special Issue Advances in Remote Sensing for Crop Monitoring and Yield Estimation)
Figure 1
<p>Location of the study area.</p>
Figure 2
<p>Technical flowchart of the study.</p>
Figure 3
<p>Gray correlation degree between the climate yield and the agricultural disaster data in Jilin (<b>a</b>) and Liaoning (<b>b</b>).</p>
Figure 4
<p>The accuracy of yield estimation models built using variables selected by random forest ((<b>a</b>) GPP group in Jilin Province, (<b>b</b>) NDVI group in Jilin Province, (<b>c</b>) GPP group in Liaoning Province, and (<b>d</b>) NDVI group in Liaoning Province).</p>
Figure 5
<p>The modeling process according to the two-stage importance evaluation method in Jilin Province.</p>
Figure 6
<p>The modeling process according to the two-stage importance evaluation method in Liaoning Province.</p>
Figure 7
<p>Area of cultivated land suffering agricultural disasters in Jilin (<b>a</b>) and Liaoning (<b>b</b>) Provinces.</p>
Figure 8
<p>The modeling accuracy in three severe disaster years in Jilin (<b>a</b>) and Liaoning (<b>b</b>) Provinces.</p>
Figure 9
<p>Correlation coefficients of the candidate input variables in Jilin (<b>a</b>) and Liaoning (<b>b</b>) Provinces.</p>
Figure 10
<p>The resulting modeling accuracy using only the technical yield and by adding different variable groups on the basis of the technical yield in Jilin (<b>a</b>) and Liaoning (<b>b</b>) Provinces.</p>
Figure 11
<p>The cumulative number of times that variables were selected as one of the input variables for building models 1–4.</p>
Figure 12
<p>Variance in the technical yield in Jilin (<b>a</b>) and Liaoning (<b>b</b>) Provinces.</p>
Figure 13
<p>The proportion of irrigated farmland to farmland.</p>
Figure 14
<p>The climate yield in Jilin and Liaoning Provinces during the study period.</p>
20 pages, 5212 KiB  
Article
Spatiotemporal Changes of Winter Wheat Planted and Harvested Areas, Photosynthesis and Grain Production in the Contiguous United States from 2008–2018
by Xiaocui Wu, Xiangming Xiao, Jean Steiner, Zhengwei Yang, Yuanwei Qin and Jie Wang
Remote Sens. 2021, 13(9), 1735; https://doi.org/10.3390/rs13091735 - 29 Apr 2021
Cited by 12 | Viewed by 4087
Abstract
Winter wheat is a main cereal crop grown in the United States of America (USA), and the USA is the third largest wheat exporter globally. Timely and reliable in-season forecasts and year-end estimates of winter wheat grain production in the USA are needed for regional and global food security. In this study, we assessed the consistency between agricultural statistical reports and satellite-based data for winter wheat over the contiguous US (CONUS) at both the county and national scales. First, we compared the planted area estimates from the National Agricultural Statistics Service (NASS) and the Cropland Data Layer (CDL) from 2008–2018. Second, we investigated the relationship between gross primary production (GPP) estimated by the vegetation photosynthesis model (VPM) and grain production from the NASS. Lastly, we explored the in-season utility of GPPVPM in monitoring seasonal production. Strong spatiotemporal consistency of planted areas was found between the NASS and CDL datasets. However, in the Southern Great Plains, both the CDL and NASS planted acreages were noticeably larger (>20%) than the NASS harvested area, as some winter wheat fields there were used as forage for cattle grazing. County-level GPPVPM was linearly related to grain production of winter wheat, with an R2 value of 0.68 across the CONUS. The relationships between grain production and GPPVPM in those counties without a substantial difference (<20%) between planted and harvested area were much stronger, and their harvest index (HIGPP) values ranged from 0.2–0.3. GPPVPM in May could explain about 70–90% of the variance in winter wheat grain production. Our findings highlight the potential of GPPVPM for winter wheat monitoring, especially in counties with a high harvested-to-planted ratio; this could provide useful data to guide planning and marketing for decision makers, stakeholders, and the public.
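The county-level production-vs-GPP regression and the GPP-based harvest index HI_GPP = grain production / GPP can be sketched as below; the county values are invented stand-ins for the VPM GPP estimates and NASS statistics:

```python
import numpy as np

# Invented county-level seasonal GPP and winter wheat grain production,
# generated around a harvest index of ~0.25 with small perturbations.
gpp = np.array([120.0, 150.0, 90.0, 200.0, 170.0, 130.0])
prod = 0.25 * gpp + np.array([2.0, -3.0, 1.0, -4.0, 3.0, 1.0])

# GPP-based harvest index per county: HI_GPP = production / GPP.
hi_gpp = prod / gpp

# Least-squares line of production vs. GPP and its R^2.
slope, intercept = np.polyfit(gpp, prod, 1)
pred = slope * gpp + intercept
r2 = 1 - np.sum((prod - pred) ** 2) / np.sum((prod - prod.mean()) ** 2)
```

A high R2 with HI_GPP clustered in a narrow band (here near 0.25, comparable to the paper's 0.2–0.3 range) is what makes cumulative GPP a usable in-season production predictor.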
(This article belongs to the Special Issue Advances in Remote Sensing for Crop Monitoring and Yield Estimation)
Figure 1
<p>Annual maps of county-level winter wheat over CONUS for year 2010: (<b>a</b>) CDL-derived planted area (plt_CDL), (<b>b</b>) NASS planted area (plt_NASS), (<b>c</b>) NASS harvested area (harv_NASS), (<b>d</b>) NASS grain yield, (<b>e</b>) NASS grain production, and (<b>f</b>) annual averaged GPP<sub>VPM</sub> of winter wheat.</p>
Figure 2
<p>Interannual changes of winter wheat planted and harvested areas in the CONUS from 2008–2018 (plt_CDL, plt_NASS, harv_NASS): (<b>a</b>) planted area and harvested area; (<b>b</b>) anomalies of planted area and harvested area for the multi-year averages from 2008–2018.</p>
Figure 3
<p>Comparisons between winter wheat planted areas and harvested area over CONUS from 2008–2018 at the county scale from the CDL and NASS datasets (plt_CDL, plt_NASS, harv_NASS). (<b>a</b>) Planted area, (<b>b</b>,<b>c</b>) planted area vs. harvested area, (<b>d</b>–<b>f</b>) the spatial discrepancy (relative difference, %) in 2010 for planted area (<b>d</b>), and between planted area and harvested area (<b>e</b>,<b>f</b>). Year 2011 is a typical drought year over the winter wheat belt, and 2016 is a wet year.</p>
Figure 4
<p>The distributions of interannual trends of county-level winter wheat planted area over CONUS between 2008 and 2018 derived from the CDL and NASS datasets. (<b>a</b>) Trends from the CDL dataset, (<b>b</b>) trends from the NASS dataset, (<b>c</b>) <span class="html-italic">p</span>-value for the CDL dataset, S: significant, <span class="html-italic">p</span>-value &lt; 0.05, NS: not significant, <span class="html-italic">p</span>-value &gt; 0.05, (<b>d</b>) <span class="html-italic">p</span>-value for the NASS data, (<b>e</b>) the relationships between the trends of planted areas calculated from CDL and NASS, (<b>f</b>) the histograms of the trends of planted areas calculated from CDL and NASS.</p>
Figure 5
<p>The relationships between county-level winter wheat grain production and cropping areas in the CONUS from 2008 to 2018 from the CDL and NASS datasets (plt_CDL, plt_NASS, harv_NASS). (<b>a</b>) Grain production versus CDL planted area, (<b>b</b>) grain production versus NASS planted area, and (<b>c</b>) grain production versus NASS harvested area. The black solid line is the linear regression result for all the counties from 2008 to 2018.</p>
Figure 6
<p>Interannual changes of (<b>a</b>) NASS winter wheat grain production (prod_NASS) and total GPP estimated from VPM (GPP<sub>VPM</sub>), (<b>b</b>) anomalies of prod_NASS and GPP<sub>VPM</sub> for the mean of 2008–2018.</p>
Figure 7
<p>Interannual trends of NASS winter wheat grain production (prod_NASS) and GPP<sub>VPM</sub> from 2008–2018 in the CONUS at a county scale. (<b>a</b>) Changing trend for NASS grain production from 2008–2018; (<b>b</b>) changing trend for GPP<sub>VPM</sub> from 2008–2018; (<b>c</b>) <span class="html-italic">p</span>-value of the linear regression model for calculation of NASS grain production trends, S means significant trend with <span class="html-italic">p</span> &lt; 0.05, and NS means not significant trend with <span class="html-italic">p</span> &gt; 0.05; (<b>d</b>) similar to (<b>c</b>), but for the trend of GPP<sub>VPM</sub>; (<b>e</b>) linear regression between the trend of NASS grain production and of GPP<sub>VPM</sub> for those counties with continuous NASS grain production and GPP<sub>VPM</sub> data from 2008–2018; (<b>f</b>) histograms for the trend of NASS grain production and of GPP<sub>VPM</sub>.</p>
Figure 8
<p>(<b>a</b>) The relationships between county-level winter wheat GPP<sub>VPM</sub> and grain production in the CONUS from 2008 to 2018, labeled by year; the black solid line shows the linear regression result for all the county-year data. (<b>b</b>) Linear regression between GPP<sub>VPM</sub> and NASS grain production from 2008 to 2018, labeled by the relative difference between CDL-derived planted area (plt_CDL) and NASS harvested area (harv_NASS). (<b>c</b>) Density plot of the relationship between HI<sub>GPP</sub> and the difference between plt_CDL and harv_NASS. (<b>d</b>) Histogram of HI<sub>GPP</sub> for all the county-years with a difference between plt_CDL and harv_NASS of less than 20%.</p>
Figure 9
<p>Spatial distribution of harvest index derived from GPP<sub>VPM</sub> and NASS grain production (HI<sub>GPP</sub>) in 2010 for (<b>a</b>) all the counties and (<b>b</b>) counties with small differences (&lt;20%) between CDL-derived planted area and NASS harvested area.</p>
Figure 10
<p>The prediction skill of the linear regression models that predict county-level crop grain production from NASS statistics by using accumulative GPP estimates over time (8-day interval) from the VPM and CDL cropping area over the years for winter wheat from 2008–2018 over (<b>a</b>) all counties in CONUS; (<b>b</b>) all counties in Montana; (<b>c</b>) all counties in Washington; (<b>d</b>) all counties in Kansas; (<b>e</b>) all counties in Oklahoma; (<b>f</b>) CONUS for all counties with differences less than 20% between CDL-derived planted area and NASS harvested area.</p>

Review


19 pages, 3674 KiB  
Review
Monitoring and Analyzing Yield Gap in Africa through Soil Attribute Best Management Using Remote Sensing Approaches: A Review
by Keltoum Khechba, Ahmed Laamrani, Driss Dhiba, Khalil Misbah and Abdelghani Chehbouni
Remote Sens. 2021, 13(22), 4602; https://doi.org/10.3390/rs13224602 - 16 Nov 2021
Cited by 18 | Viewed by 5890
Abstract
Africa has the largest population growth rate in the world and an agricultural system characterized by the predominance of smallholder farmers. Improving food security in Africa will require a good understanding of farming system yields as well as reducing yield gaps (i.e., the difference between potential yield and actual farmer yield). To this end, crop yield gap practices in African countries need to be understood in order to close this gap while decreasing the environmental impacts of agricultural systems. For instance, the variability of yields has been shown to be strongly controlled by soil fertilizer use, irrigation management, soil attributes, and the climate. Consequently, quantitative assessment and mapping of soil attributes such as nitrogen (N), phosphorus (P), potassium (K), soil organic carbon (SOC), moisture content (MC), and soil texture (i.e., clay, sand, and silt contents) on the ground are essential to potentially reducing the yield gap. However, to assess, measure, and monitor these soil yield-related parameters in the field, rapid, accurate, and inexpensive methods are needed. Recent advances in remote sensing technologies and high computational performance offer a unique opportunity to implement cost-effective spatiotemporal methods for estimating crop yield with important levels of scalability. However, researchers and scientists in Africa are not yet taking full advantage of the increasingly available geospatial remote sensing technologies and data for yield studies. The objectives of this report are to (i) review the scientific literature on the current status of African yield gap analysis research and its variation with respect to soil property management using remote sensing techniques; (ii) review and describe optimal yield practices in Africa; and (iii) identify gaps and limitations to higher yields in African smallholder farms and propose possible improvements.
Our literature review covered 80 publications over a period of 22 years (1998–2020) across selected African countries with potential for yield improvement. Our results show that (i) the number of agriculture yield-focused remote sensing studies has gradually increased, with the largest proportion of studies published during the last 15 years; (ii) most studies were conducted exclusively using multispectral Landsat and Sentinel sensors; (iii) over the past decade, hyperspectral imagery has contributed to a better understanding of yield gap analysis compared to multispectral imagery; and (iv) soil nutrients (i.e., NPK) are not the main factor influencing the studied crop productivity in Africa, whereas clay, SOC, and soil pH were the most examined soil properties in prior papers.
(This article belongs to the Special Issue Advances in Remote Sensing for Crop Monitoring and Yield Estimation)
Figure 1
<p>Map showing the 13 African countries investigated (dark grey color) in this study.</p>
Figure 2
<p>Different production levels yield gap as determined by growth defining, limiting, and reducing factors.</p>
Figure 3
<p>Stacked bar charts show the number of annual publications between 1998 and 2020 in Africa in the Scopus database that have: (<b>A</b>) “Yield gap” in their title, abstract, or keywords; (<b>B</b>) “Yield gap” and “at least one soil property” in their title, abstract, or keywords. (<b>C</b>) The primary platform or sensors used for the yield gap assessment.</p>
Figure 4
<p>Histogram showing the number of publications in Africa between 1998 and 2020 with regard to the crops studied for yield gap.</p>
Figure 5
<p>Number of annual publications in the selected African and Asian countries from 1998 to 2020 found in the Scopus database with the “Landsat” keyword in their title, abstract, or keywords.</p>
46 pages, 49047 KiB  
Review
Remote Sensing Applications in Sugarcane Cultivation: A Review
by Jaturong Som-ard, Clement Atzberger, Emma Izquierdo-Verdiguier, Francesco Vuolo and Markus Immitzer
Remote Sens. 2021, 13(20), 4040; https://doi.org/10.3390/rs13204040 - 10 Oct 2021
Cited by 67 | Viewed by 21449
Abstract
A large number of studies have been published addressing sugarcane management and monitoring to increase productivity and production as well as to better understand landscape dynamics and environmental threats. Building on existing reviews which mainly focused on the crop’s spectral behavior, a comprehensive review is provided which considers the progress made using novel data analysis techniques and improved data sources. To complement the available reviews, and to make the large body of research more easily accessible for both researchers and practitioners, in this review (i) we summarized remote sensing applications from 1981 to 2020, (ii) discussed key strengths and weaknesses of remote sensing approaches in the sugarcane context, and (iii) described the challenges and opportunities for future earth observation (EO)-based sugarcane monitoring and management. More than one hundred scientific studies were assessed regarding sugarcane mapping (52 papers), crop growth anomaly detection (11 papers), health monitoring (14 papers), and yield estimation (30 papers). The articles demonstrate that decametric satellite sensors such as Landsat and Sentinel-2 enable a reliable, cost-efficient, and timely mapping and monitoring of sugarcane by overcoming the ground sampling distance (GSD)-related limitations of coarser hectometric resolution data, while offering rich spectral information in the frequently recorded data. The Sentinel-2 constellation in particular provides fine spatial resolution at 10 m and high revisit frequency to support sugarcane management and other applications over large areas. For very small areas, and in particular for up-scaling and calibration purposes, unmanned aerial vehicles (UAV) are also useful. Multi-temporal and multi-source data, together with powerful machine learning approaches such as the random forest (RF) algorithm, are key to providing efficient monitoring and mapping of sugarcane growth, health, and yield. 
A number of difficulties for sugarcane monitoring and mapping were identified that are also well known for other crops. Those difficulties relate mainly to (i) the often time-consuming pre-processing of optical time series to cope with atmospheric perturbations and cloud coverage, (ii) the still substantial lack of analysis-ready data (ARD), (iii) the diversity of environmental and growth conditions (even within a given country) under which sugarcane is grown, which superimposes non-crop-related radiometric information on the observed sugarcane crop, and (iv) the general ill-posedness of retrieval and classification approaches, which adds ambiguity to the derived information.
(This article belongs to the Special Issue Advances in Remote Sensing for Crop Monitoring and Yield Estimation)
Figure 1. Sugarcane crop cycle and main phenological phases. Adapted from Molijn et al. [44] and NaanDanJain Irrigation Ltd. [60]. RGB aerial views from a drone-mounted camera (UAV DJI Phantom 3 Professional, captured in 2018) are included, together with the major risk factors per phenological phase.
Figure 2. Sugarcane crop cycle of the four main sugar-producing countries.
Figure 3. Characteristics of the main sugarcane planting patterns: (A) single-row planting; (B) double-row planting (adapted from Wang et al. [83] and Sanches et al. [84]).
Figure 4. LAI at first ratoon of the sugarcane crop compared to cumulative growing degree days (GDD) in different growth phases (modified from Teruel et al. [98]). Green dots represent observed LAI values; the red line is cumulative GDD in degrees Celsius (°C).
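Cumulative GDD, as plotted against LAI in Figure 4, is usually computed by summing the daily mean temperature excess over a base temperature. A minimal sketch (the 10 °C base temperature for sugarcane is an assumed illustrative value, not one taken from the review):

```python
# Growing degree days: daily mean temperature above a base temperature,
# accumulated over the season. Tbase = 10 °C is an illustrative assumption.
def daily_gdd(t_max, t_min, t_base=10.0):
    return max(0.0, (t_max + t_min) / 2.0 - t_base)

temps = [(32, 22), (30, 20), (18, 8)]  # (Tmax, Tmin) in °C for three days
cumulative_gdd = sum(daily_gdd(tmax, tmin) for tmax, tmin in temps)
print(cumulative_gdd)  # 17 + 15 + 3 = 35.0
```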
Figure 5">
Figure 5. Spectral signatures of a representative sugarcane crop (provided by Hamzeh et al. [45]). Red triangles represent Landsat-7 ETM+ bands 1–5 and 7 (the red line is drawn only to enhance visibility); the black line shows the continuous spectral signature based on 198 hyperspectral Hyperion bands (its straight segments are linear interpolations within the water vapor absorption bands).
Figure 6. Different viewing angles of sugarcane crops (contributed by Moriya et al. [136]). (a) Comparison between the spectral signature profile at nadir viewing and the BRDF model curve modified from Walthall’s study. (b) Spectral reflectance at ten viewing angles calculated from Walthall’s BRDF correction model.
Figure 7. Spectral signature profiles of the sugarcane crop in 2019/2020 from 25 field sampling points in Udon Thani province, Thailand. Average reflectance values were extracted from a Landsat-8 OLI image time series for the blue (blue line), green (green line), red (red line), near-infrared (NIR; gray line), shortwave infrared (SWIR 1; yellow line), and SWIR 2 (orange line) bands.
Figure 8. Example of the smoothed mean normalized difference vegetation index (NDVI) profile (green line) of the sugarcane crop in 2019/2020, Udon Thani province, Thailand. The smoothed mean NDVI line is shown within the shaded area, and vertical dotted lines separate the different sugarcane phases.
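A smoothed NDVI profile like the one in Figure 8 can in principle be reproduced by computing NDVI per scene and filtering the resulting time series. The sketch below uses a Savitzky–Golay filter on synthetic data; this is one common smoothing choice, not necessarily the method the authors used:

```python
import numpy as np
from scipy.signal import savgol_filter

def ndvi(nir, red):
    """Normalized difference vegetation index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

# Synthetic noisy seasonal NDVI time series (e.g., one value per scene).
t = np.linspace(0, 1, 36)
raw = 0.3 + 0.4 * np.sin(np.pi * t) + np.random.default_rng(1).normal(0, 0.05, t.size)

# Savitzky-Golay filter: window of 11 observations, 2nd-order polynomial.
smooth = savgol_filter(raw, window_length=11, polyorder=2)
```

The window length trades off noise suppression against how faithfully short phenological transitions (e.g., harvest drops) are preserved.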
Figure 9">
Figure 9. Distribution of sugarcane-related publications across journals in the Scopus and Web of Science databases, covering the period 1981 to 2020; the frequency distribution of papers across journals is shown.
Figure 10. Temporal distribution of the analyzed remote sensing and earth observation (EO) publications in all journals from 1981 to 2020.
Figure 11. Temporal distribution of the analyzed publications in remote sensing and earth observation (EO) journals from 1981 to 2020.
Figure 12. Regional distribution of all analyzed publications by country for the period 1981 to 2020, based on the affiliation of the first author.
Figure 13. Number of sensors used vs. time, grouped in five-year intervals.
Figure 14. Scatter plot of study area vs. spatial resolution of the sensor data.
Figure 15. Overall accuracy (OA, %) achieved with different classification methods (abbreviations are defined in Table 4).
Figure 16. Relation between overall accuracy (OA, %) and spatial resolution of the image data in meters (m).
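The overall accuracy (OA) compared in Figures 15 and 16 is simply the fraction of correctly classified samples: the trace of the confusion matrix divided by its total. A small sketch with a hypothetical 3-class confusion matrix:

```python
import numpy as np

def overall_accuracy(confusion):
    """Overall accuracy: correctly classified samples / all samples."""
    confusion = np.asarray(confusion, dtype=float)
    return np.trace(confusion) / confusion.sum()

# Hypothetical 3-class confusion matrix (rows = reference, cols = predicted).
cm = [[50, 3, 2],
      [4, 45, 1],
      [2, 3, 40]]
oa = overall_accuracy(cm)
print(f"OA = {oa:.1%}")  # 135 / 150 = 90.0%
```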
Figure 17">
Figure 17. Bar plot of the different classifier methods by year of publication.
Figure 18. Example of sugarcane canopy, a water body, and a pasture field in a UAV image acquired on 16 May 2018. The image shows sugarcane in the grand growth stage.
Figure 19. Comparison of single (a), mean (b), and maximum NDVI (c) pixel values of Sentinel-2 (S2) image data (November–December 2019) at 10 m spatial resolution. These data (cloud cover &lt; 40%) were acquired for Udon Thani province, Thailand.
Figure 20. Three-dimensional visualization space generated by Wang et al. [35]. The 3-band NDVI shows clear clusters for the different land use classes, representative of the different phenological stages (germination, grand growth, and ripening) of sugarcane.
Figure 21. NDVI profile vs. thermal time as described by El Hajj et al. [220]; blue dots represent several sugarcane fields on Réunion Island, and the red dotted lines separate the different membership values.
Figure 22. Three-dimensional model of sugarcane height generated with a UAV DJI Phantom 3 Professional on 16 December 2020 at field scale in northeast Thailand; the crop surface model (CSM, top level) is obtained by subtracting the digital terrain model (DTM, bottom level) from the digital surface model (DSM, middle level).
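The height retrieval illustrated in Figure 22 reduces to a per-pixel raster subtraction, CSM = DSM − DTM. A toy example with hypothetical elevation values:

```python
import numpy as np

# Toy elevation rasters (metres): DSM includes the canopy, DTM is bare ground.
dsm = np.array([[102.0, 103.5],
                [101.0, 104.0]])
dtm = np.array([[100.0, 100.5],
                [100.0, 101.0]])

# Crop surface model: canopy height above ground, per pixel.
csm = dsm - dtm
print(csm)  # [[2. 3.] [1. 3.]]
```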
Figure 23">
Figure 23. Measured vs. estimated canopy nitrogen content (%N) for different sugarcane varieties (adapted from Miphokasap and Wannasiri [127]). (a) First-derivative spectrum (FDS), (b) continuum-removed derivative reflectance (CRDR), and (c) band depth (BD) were used to calibrate three prediction models using the SVR method.
Figure 24. Density of canopy nitrogen concentration (%N) in sugarcane fields; the spatial map was generated by Miphokasap and Wannasiri [127] using an SVR model and a Hyperion satellite image.
Figure 25. Sugarcane yield estimation model from Yu et al. [245]; (a) LiDAR-derived plant height (PH) compared to observed height (cm); (b) sugarcane yield (t/ha) predicted by the model.
Figure 26. Distribution of smoothed time series by Duveiller et al. [160], showing the 5th–95th, 25th–75th, and 50th percentiles of pixels with a crop-specific spatial purity of at least 75%.

Other


14 pages, 2061 KiB  
Technical Note
Differentiate Soybean Response to Off-Target Dicamba Damage Based on UAV Imagery and Machine Learning
by Caio Canella Vieira, Shagor Sarkar, Fengkai Tian, Jing Zhou, Diego Jarquin, Henry T. Nguyen, Jianfeng Zhou and Pengyin Chen
Remote Sens. 2022, 14(7), 1618; https://doi.org/10.3390/rs14071618 - 28 Mar 2022
Cited by 14 | Viewed by 3410
Abstract
The wide adoption of dicamba-tolerant (DT) soybean has led to numerous cases of off-target dicamba damage to non-DT soybean and dicot crops. This study aimed to develop a method to differentiate soybean response to dicamba using unmanned-aerial-vehicle (UAV)-based imagery and machine learning models. Soybean lines were visually classified into three classes of injury, i.e., tolerant, moderate, and susceptible to off-target dicamba. A quadcopter with a built-in RGB camera was used to collect images of field plots at a height of 20 m above ground level. Seven image features were extracted for each plot: canopy coverage, contrast, entropy, green leaf index (GLI), hue, saturation, and triangular greenness index (TGI). Classification models based on artificial neural network (ANN) and random forest (RF) algorithms were developed to differentiate the three classes of response to dicamba. Significant differences among classes were observed for each feature, while no significant differences were observed across fields. The ANN and RF models distinguished tolerant and susceptible lines with an overall accuracy of 0.74 and 0.75, respectively. The imagery-based classification model can be implemented in a breeding program to effectively differentiate phenotypic dicamba response and identify soybean lines with tolerance to off-target dicamba damage.
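Two of the seven image features, GLI and TGI, can be computed directly from RGB values. The sketch below uses the standard GLI definition and a simplified digital-camera form of TGI; the paper's exact formulations may differ, so treat both as illustrative:

```python
import numpy as np

def green_leaf_index(r, g, b):
    """GLI = (2G - R - B) / (2G + R + B), in [-1, 1]."""
    return (2 * g - r - b) / (2 * g + r + b)

def triangular_greenness_index(r, g, b):
    """Simplified digital-camera TGI form: G - 0.39*R - 0.61*B (assumed here)."""
    return g - 0.39 * r - 0.61 * b

# Toy per-pixel RGB reflectance arrays for one plot.
r = np.array([0.10, 0.12])
g = np.array([0.30, 0.25])
b = np.array([0.08, 0.10])
print(green_leaf_index(r, g, b).round(3))
print(triangular_greenness_index(r, g, b).round(3))
```

Plot-level features would then be the mean (or another statistic) of these per-pixel values over the segmented canopy.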
(This article belongs to the Special Issue Advances in Remote Sensing for Crop Monitoring and Yield Estimation)
Figure 1. Ground-based classification scale of field plots for off-target dicamba damage with three phenotypic classes: tolerant, moderate, and susceptible. The tolerant plot is the conventional breeding line S16-12774C, the moderate plot the conventional breeding line PR17-482, and the susceptible plot the GT commercial check AG 4135 (Monsanto Co., Creve Coeur, MO, USA).
Figure 2. Distribution of the dicamba response classification in each location and combined across all locations. Overall, approximately 26.7% of the plots were classified as tolerant, 36.4% as moderate, and 36.9% as susceptible.
Figure 3. Standardized feature distributions across all fields and dicamba response classes. Significant differences among classes were identified for all features; higher values of canopy coverage, entropy, GLI, hue, saturation, and TGI indicate tolerance to dicamba, whereas higher values of contrast indicate susceptibility.
Figure 4. Feature importance, represented as the mean decrease in the Gini coefficient, for the seven image features included in the RF model.
17 pages, 3896 KiB  
Technical Note
Optimization of Topdressing for Winter Wheat by Accurate Growth Monitoring and Improved Production Estimation
by Jingchun Ji, Jianli Liu, Jingjing Chen, Yujie Niu, Kefan Xuan, Yifei Jiang, Renhao Jia, Can Wang and Xiaopeng Li
Remote Sens. 2021, 13(12), 2349; https://doi.org/10.3390/rs13122349 - 16 Jun 2021
Cited by 2 | Viewed by 2422
Abstract
Topdressing accounts for approximately 40% of the total nitrogen (N) application to winter wheat on the Huang-Huai-Hai Plain in China. However, the N use efficiency of topdressing is low due to the inflexible topdressing practice used by local farmers. To improve the N use efficiency of winter wheat, an optimization method for topdressing (THP) is proposed that uses unmanned aerial vehicle (UAV)-based remote sensing to accurately acquire the growth status, together with an improved model for growth potential estimation, to optimize the N fertilizer amount for topdressing (NFT). The method was validated and compared with three other methods in a field experiment: the conventional local farmers’ method (TLF), the nitrogen fertilization optimization algorithm (NFOA) proposed by Raun and Lukina (TRL), and a simplification introduced by Li and Zhang (TLZ). When insufficient basal fertilizer was provided, the proposed method recommended as much NFT as the TLF method, i.e., 25.05% and 11.88% more than the TRL and TLZ methods, and increased yields by 4.62% and 2.27%, respectively; when sufficient basal fertilizer was provided, the THP method followed the TRL and TLZ methods in reducing NFT, decreasing it by 4.20% while maintaining as much yield as the TLF method. The results show that THP can enhance crop production under N-insufficient preceding conditions by prescribing more fertilizer, and can increase nitrogen use efficiency (NUE) by lowering the fertilizer amount when enough basal fertilizer is provided.
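The NFOA underlying the TRL method follows the general Raun/Lukina logic: estimate yield potential from an in-season NDVI-based index (INSEY), convert it to expected grain N removal, and divide the unmet N demand by a use efficiency. The sketch below is a hedged illustration with placeholder coefficients, not the coefficients calibrated in this paper:

```python
import math

# Hedged sketch of a Raun/Lukina-style NFOA nitrogen recommendation.
# All coefficients below are illustrative placeholders, not values from the paper.

def insey(ndvi, days_from_planting):
    """In-season estimate of yield: NDVI normalized by days of growth."""
    return ndvi / days_from_planting

def predicted_grain_yield(insey_value, a=0.6, b=300.0):
    """Exponential yield-potential model, PGY = a * exp(b * INSEY), in t/ha."""
    return a * math.exp(b * insey_value)

def recommended_n(pgy, grain_n=0.021, forage_n_uptake=40.0, nue=0.7):
    """RN = (predicted grain N removal - current N uptake) / NUE, in kg N/ha."""
    grain_n_removal = pgy * 1000.0 * grain_n  # t/ha -> kg/ha, times grain %N
    return max(0.0, (grain_n_removal - forage_n_uptake) / nue)

i = insey(ndvi=0.65, days_from_planting=120)
pgy = predicted_grain_yield(i)
print(round(recommended_n(pgy), 1))
```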
(This article belongs to the Special Issue Advances in Remote Sensing for Crop Monitoring and Yield Estimation)
Figure 1. Calculation steps of the four topdressing methods for winter wheat. (THP, TRL, TLZ, and TLF are the different topdressing methods; RN: recommended nitrogen fertilizer amount for topdressing; RV: relative volume; AGB: above-ground biomass; Nt: nitrogen concentration in the above-ground part at topdressing; Nh: nitrogen concentration in the above-ground part at harvest; INSEY: in-season estimate of grain yield; PGY: predicted grain yield; FNUP: forage nitrogen uptake; NUP: N uptake in the above-ground part. Subscripts denote time, i.e., t for topdressing and h for harvest, except for AGBth, which denotes a threshold of AGB at topdressing.)
Figure 2. Study site and experimental design. (a) Location of the Fengqiu Agro-ecological Experimental Station, CAS. (b) Experimental design (THP, TLF, TRL, TLZ, and T0 are the different topdressing methods; the basal fertilizer levels B0, B1, and B2 are shown in different colors).
Figure 3. Rainfall, irrigation, and mean temperature recorded during the 2018–2019 wheat growing season at the Fengqiu Agro-ecological Experimental Station, CAS.
Figure 4. UAV-based images of winter wheat under different basal fertilizer levels. (a) RGB. (b) NDVI. (c) Coverage.
Figure 5. Illustration of the formulas used in THP, TRL, and TLZ, obtained from the auxiliary experiment. (a) Predicting potential grain yield (PGY) from INSEY (in-season estimate of grain yield). (b) Predicting forage nitrogen uptake (FNU_NDVI) from the normalized difference vegetation index (NDVI). (c) Predicting the nitrogen concentration in the above-ground part at topdressing (Nt) from NDVI. (d) Predicting above-ground biomass at topdressing (AGBt) from relative volume.
Figure 6. Recommended nitrogen fertilizer amount for topdressing for each unit (THP, TLF, TRL, TLZ, and T0 represent the different topdressing methods).
Figure 7. Recommended nitrogen fertilizer amount for topdressing under different basal fertilizer levels. (B0, B1, and B2 represent basal fertilizer levels of 0, 57, and 114 kg N ha⁻¹, respectively; THP, TLF, TRL, and TLZ represent the different topdressing methods; different letters indicate significance by Fisher’s protected least significant difference at p &lt; 0.05.)
Figure 8. Growth status of winter wheat after topdressing. (a–c) Cumulative above-ground biomass. (d–f) Cumulative plant height. (g–i) Decline of nitrogen concentration in the above-ground part. (j–l) Decline of chlorophyll concentration. (B0, B1, and B2 represent basal fertilizer levels of 0, 57, and 114 kg N ha⁻¹, respectively; THP, TLF, TRL, TLZ, and T0 represent the different topdressing methods; different letters indicate significance by Fisher’s protected least significant difference at p &lt; 0.05 at the milk-ripe stage.)
Figure 9. Grain yield and straw biomass of winter wheat under the different topdressing methods. (a–c) Grain yields under the three basal fertilizer levels. (d–f) Straw biomasses under the three basal fertilizer levels. (B0, B1, and B2 represent basal fertilizer levels of 0, 57, and 114 kg N ha⁻¹, respectively; THP, TLF, TRL, TPGY, and TLZ represent the different topdressing methods; different letters indicate significance by Fisher’s protected least significant difference at p &lt; 0.05.)