The Emerging Trends and Applications of Big Data and Machine Learning/Artificial Intelligence (AI) in Remote Sensing

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "AI Remote Sensing".

Deadline for manuscript submissions: closed (31 August 2022) | Viewed by 55267

Special Issue Editors


Prof. Dr. Liangxiu Han
Guest Editor
School of Computing, Mathematics & Digital Technology, Manchester Metropolitan University, Manchester M15 6BH, UK
Interests: big data/machine learning; artificial intelligence; parallel and distributed computing

Dr. Stefano Pignatti
Guest Editor
Institute of Methodologies for Environmental Analysis (IMAA), National Research Council (CNR), C.da S. Loja, 85050 Tito, PZ, Italy
Interests: hyperspectral remote sensing (VSWIR–LWIR); sensor data calibration and pre-processing; field spectroscopy; retrieval of surface parameters; soil spectral characterization and geology; archaeological site analysis

Special Issue Information

Dear Colleagues,

Remotely sensed data generated by various platforms (e.g., satellites, manned aircraft, unmanned aerial vehicles, and ground-based systems) are a unique source of big data with great potential for informed decision making in many domains, including agriculture, the environment, business, and transport. Recent advances in data science and AI/machine learning have shown great promise for processing, managing, and analysing such large, heterogeneous data sources at both local and global scales for a variety of tasks, including land use and land cover mapping (classification), object-based image analysis (segmentation, object detection), and quantitative modelling (plant biophysical/biochemical parameter retrieval, yield estimation, ecological assessment). This Special Issue aims to provide an updated, refreshing view of current developments, emerging trends, and applications in the field. The ultimate goal is to promote research and the sustainable development of advanced big data analytics and AI/machine learning schemes for the efficient analysis of remotely sensed data.

Prof. Dr. Liangxiu Han
Prof. Dr. Wenjiang Huang
Prof. Dr. Yanbo Huang
Prof. Dr. Jiali Shang
Dr. Stefano Pignatti
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Big data analytics/AI/machine learning
  • Land use and land cover mapping (classifications)
  • Object-based image analysis (segmentation, object detection)
  • Quantitative modelling (plant biophysical/biochemical parameter retrieval, yield estimation, ecological assessment)
  • Remote sensing applications (e.g., Agriculture, Environment)

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (13 papers)


Research

19 pages, 1910 KiB  
Article
Detection of Glass Insulators Using Deep Neural Networks Based on Optical Imaging
by Jinyu Wang, Yingna Li and Wenxiang Chen
Remote Sens. 2022, 14(20), 5153; https://doi.org/10.3390/rs14205153 - 15 Oct 2022
Cited by 11 | Viewed by 2335
Abstract
As a precursor to tasks such as fault detection and line inspection, insulator detection is a crucial task. However, given the complex environment of high-voltage transmission lines, traditional insulator detection accuracy is unsatisfactory, and manual inspection is dangerous and inefficient. To improve this situation, this paper proposes an insulator detection model, Siamese ID-YOLO, based on a deep neural network. The model achieves the best balance between speed and accuracy compared with traditional detection methods. For image enhancement, this paper adopts a Canny-based edge detection operator to highlight the edges of insulators and obtain more semantic information. Based on the Darknet53 network and a Siamese network, the original insulator image and the edge image are jointly input into the model. The Siamese ID-YOLO model achieves more fine-grained extraction of insulators through weight sharing between the Siamese branches, thereby improving insulator detection accuracy. This paper applies statistical clustering analysis to the area and aspect ratio of the insulator dataset, then pre-sets and adjusts the hyperparameters of the model's anchor boxes to make them more suitable for the insulator detection task. In addition, this paper builds an insulator dataset named InsuDaSet from UAV (Unmanned Aerial Vehicle) images of insulators for model training. The experiments show that insulator detection can reach 92.72% detection accuracy and 84 FPS detection speed, which fully meets online insulator detection requirements. Full article
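The anchor-box tuning step described above (statistical clustering of insulator box areas and aspect ratios) is commonly implemented by clustering the labelled box dimensions; a minimal sketch of that idea, using k-means on hypothetical width–height pairs rather than the authors' exact procedure or the InsuDaSet annotations, might look like this:

```python
# Minimal sketch: deriving anchor-box priors by clustering labelled box sizes.
# The (width, height) pairs below are hypothetical stand-ins for an annotated
# insulator dataset such as InsuDaSet; the paper's exact procedure may differ.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical normalised box widths/heights (fractions of the image size).
boxes = np.column_stack([rng.uniform(0.05, 0.6, 500),   # widths
                         rng.uniform(0.02, 0.3, 500)])  # heights

k = 9  # a typical anchor count for YOLO-style detectors (assumption)
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(boxes)
anchors = km.cluster_centers_[np.argsort(km.cluster_centers_.prod(axis=1))]

for w, h in anchors:
    print(f"anchor: w={w:.3f}, h={h:.3f}, aspect={w / h:.2f}")
```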
Figures: (1) the neck of ID-YOLO based on FPN and PAN; (2) the overall Siamese ID-YOLO architecture (backbone, neck, head, and Siamese network with edge information as an auxiliary input); (3) feature-map comparison between ID-YOLO and Siamese ID-YOLO; (4) test results of each model on the InsuDaSet dataset.
30 pages, 11233 KiB  
Article
Multibranch Unsupervised Domain Adaptation Network for Cross Multidomain Orchard Area Segmentation
by Ming Liu, Dong Ren, Hang Sun and Simon X. Yang
Remote Sens. 2022, 14(19), 4915; https://doi.org/10.3390/rs14194915 - 1 Oct 2022
Cited by 1 | Viewed by 1841
Abstract
Although unsupervised domain adaptation (UDA) has been extensively studied in remote sensing image segmentation tasks, most UDA models are designed based on single-target domain settings. Large-scale remote sensing images often have multiple target domains in practical applications, and the simple extension of single-target UDA models to multiple target domains is unstable and costly. Multi-target unsupervised domain adaptation (MTUDA) is a more practical scenario that has great potential for solving the problem of crossing multiple domains in remote sensing images. However, existing MTUDA models neglect to learn and control the private features of the target domain, leading to missing information and negative migration. To solve these problems, this paper proposes a multibranch unsupervised domain adaptation network (MBUDA) for orchard area segmentation. The multibranch framework aligns multiple domain features, while preventing private features from interfering with training. We introduce multiple ancillary classifiers to help the model learn more robust latent target domain data representations. Additionally, we propose an adaptation enhanced learning strategy to reduce the distribution gaps further and enhance the adaptation effect. To evaluate the proposed method, this paper utilizes two settings with different numbers of target domains. On average, the proposed method achieves a high IoU gain of 7.47% over the baseline (single-target UDA), reducing costs and ensuring segmentation model performance in multiple target domains. Full article
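Written generically, the multibranch adversarial objective sketched in the paper's architecture figure (segmentation loss, ancillary segmentation loss, and adversarial loss for the feature extractor, plus a loss for each discriminator) has the following form for K target domains; the weighting factors λ are placeholders, not the authors' settings:

$$\min_{G}\;\sum_{k=1}^{K}\Big(L_{s}^{k}+\lambda_{a}\,L_{a}^{k}+\lambda_{adv}\,L_{adv}^{k}\Big),\qquad \min_{D_{k}}\;L_{D}^{k},\quad k=1,\dots,K,$$

where $G$ is the shared feature extractor/segmentation network, $L_{s}^{k}$ the segmentation loss, $L_{a}^{k}$ the ancillary classifier loss, $L_{adv}^{k}$ the adversarial loss that pushes target-domain features to fool the $k$-th discriminator $D_{k}$, and $L_{D}^{k}$ the corresponding discriminator loss.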
Figures: (1) unsupervised domain adaptation scenarios in cross-multidomain orchard area segmentation; (2) sample images from the four datasets; (3) overall MBUDA architecture, with the feature extractor optimized by segmentation, ancillary segmentation, and adversarial losses and each discriminator by its own loss; (4) the adaptation enhanced learning strategy; (5–9, A1–A9) orchard area segmentation outputs for different source/target combinations of Datasets ZG, CY, XT1, and XT2.
20 pages, 11207 KiB  
Article
Optimization of Remote Sensing Image Segmentation by a Customized Parallel Sine Cosine Algorithm Based on the Taguchi Method
by Fang Fan, Gaoyuan Liu, Jiarong Geng, Huiqi Zhao and Gang Liu
Remote Sens. 2022, 14(19), 4875; https://doi.org/10.3390/rs14194875 - 29 Sep 2022
Cited by 7 | Viewed by 2259
Abstract
Affected by solar radiation, atmospheric windows, radiation aberrations, and other atmospheric and sky environmental factors, remote sensing images usually contain a large amount of noise and suffer from problems such as non-uniform image feature density. These problems make high-precision segmentation of remote sensing images very difficult. To improve the segmentation of remote sensing images, this study adopted an improved metaheuristic algorithm to optimize the parameter settings of pulse-coupled neural networks (PCNNs). Using the Taguchi method, the optimal parallelism scheme of the algorithm was effectively tailored to a specific target problem, avoiding blindness in the design of the algorithm's parallel structure. The superiority of the customized parallel sine cosine algorithm (SCA) based on the Taguchi method (TPSCA) was demonstrated in tests with different types of benchmark functions. In this study, simulations were performed using IKONOS, GeoEye-1, and WorldView-2 satellite remote sensing images. The results showed that the accuracy of the proposed remote sensing image segmentation model was significantly improved. Full article
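For readers unfamiliar with the underlying optimizer, the position update of the standard sine cosine algorithm can be sketched as below on a toy sphere function; this is a plain, serial SCA, not the authors' Taguchi-tuned parallel variant or its PCNN parameter search space:

```python
# Minimal sketch of the standard sine cosine algorithm (SCA) position update,
# shown on a toy sphere objective; the paper's customized parallel TPSCA and
# its PCNN parameter ranges are not reproduced here.
import numpy as np

def sca(objective, dim=4, pop=20, iters=200, lb=-5.0, ub=5.0, a=2.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(pop, dim))            # candidate solutions
    best = X[np.argmin([objective(x) for x in X])].copy()
    for t in range(iters):
        r1 = a - t * a / iters                          # linearly decreasing amplitude
        for i in range(pop):
            r2 = rng.uniform(0, 2 * np.pi, dim)
            r3 = rng.uniform(0, 2, dim)
            r4 = rng.uniform(0, 1, dim)
            step = np.where(r4 < 0.5,
                            r1 * np.sin(r2) * np.abs(r3 * best - X[i]),
                            r1 * np.cos(r2) * np.abs(r3 * best - X[i]))
            X[i] = np.clip(X[i] + step, lb, ub)
        cand = X[np.argmin([objective(x) for x in X])]
        if objective(cand) < objective(best):
            best = cand.copy()
    return best, objective(best)

best, value = sca(lambda x: np.sum(x ** 2))             # sphere benchmark
print(best, value)
```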
Graphical abstract and figures: (1) working principle of the PCNN; (2–3) execution flows of the two parallel strategies; (4) benchmark optimization results (2D position distribution, first-dimension trajectory, average fitness, convergence curves); (5) signal-to-noise-ratio main effect maps for unimodal, multimodal, and complex benchmark functions; (6) TPSCA–PCNN flowchart; (7) segmentation before and after image preprocessing; (8–13) segmentation results for IKONOS, GeoEye-1, and WorldView-2 images, comparing TPSCA–PCNN, PCNN, and ELM.
23 pages, 18464 KiB  
Article
The Self-Supervised Spectral–Spatial Vision Transformer Network for Accurate Prediction of Wheat Nitrogen Status from UAV Imagery
by Xin Zhang, Liangxiu Han, Tam Sobeih, Lewis Lappin, Mark A. Lee, Andrew Howard and Aron Kisdi
Remote Sens. 2022, 14(6), 1400; https://doi.org/10.3390/rs14061400 - 14 Mar 2022
Cited by 16 | Viewed by 6680
Abstract
Nitrogen (N) fertilizer is routinely applied by farmers to increase crop yields. At present, farmers often over-apply N fertilizer in some locations or at certain times because they do not have high-resolution crop N status data. N-use efficiency can be low, with the remaining N lost to the environment, resulting in higher production costs and environmental pollution. Accurate and timely estimation of N status in crops is crucial to improving cropping systems’ economic and environmental sustainability. Destructive approaches based on plant tissue analysis are time consuming and impractical over large fields. Recent advances in remote sensing and deep learning have shown promise in addressing the aforementioned challenges in a non-destructive way. In this work, we propose a novel deep learning framework: a self-supervised spectral–spatial attention-based vision transformer (SSVT). The proposed SSVT introduces a Spectral Attention Block (SAB) and a Spatial Interaction Block (SIB), which allow simultaneous learning of both spatial and spectral features from UAV digital aerial imagery for accurate N status prediction in wheat fields. Moreover, the proposed framework introduces local-to-global self-supervised learning to help train the model from unlabelled data. The proposed SSVT has been compared with five state-of-the-art models, including ResNet, RegNet, EfficientNet, EfficientNetV2, and the original vision transformer, on both testing and independent datasets. The proposed approach achieved high accuracy (0.96) with good generalizability and reproducibility for wheat N status estimation. Full article
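As background to the transformer-based design, the scaled dot-product self-attention that vision transformers build on can be written in a few lines of NumPy; the sketch below uses illustrative sizes and does not reproduce the paper's Spectral Attention Block or Spatial Interaction Block:

```python
# Minimal sketch of scaled dot-product self-attention, the core operation that
# vision transformers (and hence the proposed SSVT) build on; the SAB and SIB
# blocks and the local-to-global self-supervision are not reproduced here.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, Wq, Wk, Wv):
    """tokens: (n_patches, d_model); Wq/Wk/Wv: (d_model, d_head)."""
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])        # patch-to-patch similarity
    return softmax(scores) @ v                     # weighted mix of patch values

rng = np.random.default_rng(0)
n_patches, d_model, d_head = 16, 32, 8             # illustrative sizes only
x = rng.standard_normal((n_patches, d_model))      # e.g. embedded image patches
Wq, Wk, Wv = (rng.standard_normal((d_model, d_head)) * 0.1 for _ in range(3))
print(self_attention(x, Wq, Wk, Wv).shape)         # -> (16, 8)
```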
Figures: (1) experimental layout with four fertilizer treatments in four blocks; (2) images collected near the ground and from the drone; (3) structure of the proposed SSVT; (4) image augmentations; (5) flowchart of the self-supervised learning; (6) CNN baseline structures; (7) confusion matrices with and without self-supervised learning; (8) model performance on independent drone datasets across growing stages; (9) nitrogen status estimation on drone images at the tillering and stem extension stages; (10) effect of soil segmentation on model performance; (11) attention-map visualization for ResNet and SSVT; (12) self-supervised loss convergence and t-SNE visualization; (13) inference GPU memory usage of ResNet, the original ViT, and SSVT.
18 pages, 29045 KiB  
Article
Integrating Remote Sensing and Meteorological Data to Predict Wheat Stripe Rust
by Chao Ruan, Yingying Dong, Wenjiang Huang, Linsheng Huang, Huichun Ye, Huiqin Ma, Anting Guo and Ruiqi Sun
Remote Sens. 2022, 14(5), 1221; https://doi.org/10.3390/rs14051221 - 2 Mar 2022
Cited by 12 | Viewed by 3824
Abstract
Wheat stripe rust poses a serious threat to wheat production. An effective prediction method is important for food security. In this study, we developed a prediction model for wheat stripe rust based on vegetation indices and meteorological features. First, based on time-series Sentinel-2 remote sensing images and meteorological data, wheat phenology (jointing date) was estimated using harmonic analysis of the time series combined with average cumulative temperature. Then, vegetation indices were extracted based on the phenological information. Meteorological features were screened using correlation analysis combined with independent t-test analysis. Finally, a random forest (RF) was used to construct a prediction model for wheat stripe rust. The results showed that the RF model using the combined input (phenological information-based vegetation indices and meteorological features) produced the highest prediction accuracy, with an overall accuracy of 88.7% and a kappa coefficient of 0.772. The prediction model using phenological information-based vegetation indices outperformed the prediction model using single-date image-based vegetation indices, and the overall accuracy improved from 62.9% to 78.4%. These results indicated that the method combining phenological information-based vegetation indices and meteorological features can be used for wheat stripe rust prediction. The results of the prediction model can provide guidance and suggestions for disease prevention in the study area. Full article
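The prediction model described above is, at its core, a random forest classifier fed with phenology-based vegetation indices and screened meteorological features; a minimal sketch with hypothetical feature columns and dummy labels might look like this:

```python
# Minimal sketch of a random forest classifier trained on phenology-based
# vegetation indices plus meteorological features, as described above;
# the feature values, feature choices, and labels here are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

rng = np.random.default_rng(0)
n = 300
X = np.column_stack([
    rng.uniform(0.2, 0.9, n),    # e.g. NDVI extracted around the jointing date
    rng.uniform(0.1, 0.6, n),    # e.g. a red-edge vegetation index
    rng.uniform(0, 30, n),       # e.g. mean temperature of a key month (deg C)
    rng.uniform(0, 120, n),      # e.g. cumulative precipitation (mm)
])
y = rng.integers(0, 2, n)        # 0 = healthy, 1 = stripe-rust-infected (dummy labels)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
pred = rf.predict(Xte)
print("accuracy:", accuracy_score(yte, pred), "kappa:", cohen_kappa_score(yte, pred))
```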
Figures: (1) geographic location of the study site, field survey points, and wheat-planting areas; (2) flowchart of the prediction model; (3) extraction of the regreening, jointing, and heading dates from the NDVI time-series curve; (4) extraction of phenological information-based vegetation indices; (5) wheat-jointing date estimation results; (6) mean and standard deviation of normalized vegetation indices for healthy and stripe-rust-infected samples; (7) interpolated temperature and precipitation in Qishan County; (8) correlation heatmap of meteorological features; (9) prediction and mapping of wheat stripe rust based on the RF model using PIVIs + MFs.
21 pages, 4366 KiB  
Article
Dynamic Forecast of Desert Locust Presence Using Machine Learning with a Multivariate Time Lag Sliding Window Technique
by Ruiqi Sun, Wenjiang Huang, Yingying Dong, Longlong Zhao, Biyao Zhang, Huiqin Ma, Yun Geng, Chao Ruan, Naichen Xing, Xidong Chen and Xueling Li
Remote Sens. 2022, 14(3), 747; https://doi.org/10.3390/rs14030747 - 5 Feb 2022
Cited by 14 | Viewed by 3853
Abstract
Desert locust plagues can easily cause a regional food crisis and thus affect social stability. Preventive control of the disaster highlights the early detection of hopper gregarization before they form devastating swarms. However, the response of hopper band emergence to environmental fluctuation exhibits a time lag. To realize the dynamic forecast of band occurrence with optimal temporal predictors, we proposed an SVM-based model with a temporal sliding window technique by coupling multisource time-series imagery with historical locust ground survey observations from 2000 to 2020. The sliding window method was based on a lagging variable importance ranking used to analyze the temporal organization of environmental indicators in band-forming sequences and eventually facilitate the early prediction of band emergence. Statistical results show that hopper bands are more likely to occur within 41–64 days after increased rainfall; soil moisture dynamics increasing by approximately 0.05 m³/m³ and then decreasing may enhance the chance of observing bands after 73–80 days, while sparse vegetation areas with NDVI increasing from 0.18 to 0.25 tend to witness bands after 17–40 days. The forecast model combining the optimal time lags of these dynamic indicators with other static indicators allows for a 16-day extended outlook of band presence in Somalia, Ethiopia, and Kenya. Monthly predictions from February to December 2020 display an overall accuracy of 77.46%, with an average ROC-AUC of 0.767 and a mean F-score close to 0.772. The multivariate forecast framework based on the lagging effect can realize the early warning of band presence in different spatiotemporal scenarios, supporting early decisions and response strategies for desert locust preventive management. Full article
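The multivariate time-lag sliding window amounts to building lagged copies of each dynamic indicator and selecting the most informative lag window as SVM predictors; a minimal sketch with a hypothetical rainfall series and dummy labels is shown below (the lag indices are illustrative and only loosely mimic the 41–64-day rainfall window):

```python
# Minimal sketch of the time-lag sliding window idea: build lagged copies of a
# dynamic indicator (here a hypothetical 8-day rainfall series) as candidate
# predictors, then feed a chosen lag window to an SVM. Data and labels are dummies.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def lagged_features(series, lags):
    """Stack series values at the given lags (in time steps) into a feature matrix."""
    max_lag = max(lags)
    rows = [[series[t - lag] for lag in lags] for t in range(max_lag, len(series))]
    return np.array(rows)

rng = np.random.default_rng(0)
rain = rng.gamma(2.0, 5.0, 400)                    # hypothetical 8-day rainfall composites
X = lagged_features(rain, lags=[5, 6, 7, 8])       # roughly 40-64 days earlier (assumption)
y = rng.integers(0, 2, len(X))                     # dummy band presence/absence labels

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
model.fit(X, y)
print("P(band presence) for latest window:", model.predict_proba(X[-1:])[0, 1])
```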
Figures: (1) spatial and temporal distribution of desert locust band observations in the SEK region, 2000–2020, with monthly band counts and GPM rainfall climatology; (2) methods and process (indicator extraction, time lag variable importance ranking, and the forecast model); (3) the time lag sliding window technique; (4) the max TPR + TNR threshold for binary classification; (5) value distributions of the dynamic indicators (PREC, SM, NDVI, LST) at different time lags for band presence and absence; (6) normalized relative importance of the time-lagged variables; (7–8) monthly predicted probabilities and classified band-presence areas with ground truths in SEK, February–December 2020; (9) relative importance and contribution ratio of the dynamic indicators in the desert locust life sequence.
22 pages, 8707 KiB  
Article
Novel CropdocNet Model for Automated Potato Late Blight Disease Detection from Unmanned Aerial Vehicle-Based Hyperspectral Imagery
by Yue Shi, Liangxiu Han, Anthony Kleerekoper, Sheng Chang and Tongle Hu
Remote Sens. 2022, 14(2), 396; https://doi.org/10.3390/rs14020396 - 15 Jan 2022
Cited by 46 | Viewed by 4632
Abstract
The accurate and automated diagnosis of potato late blight disease, one of the most destructive potato diseases, is critical for precision agricultural control and management. Recent advances in remote sensing and deep learning offer the opportunity to address this challenge. This study proposes a novel end-to-end deep learning model (CropdocNet) for accurate and automated late blight disease diagnosis from UAV-based hyperspectral imagery. The proposed method considers the potential disease-specific reflectance radiation variance caused by the canopy’s structural diversity and introduces multiple capsule layers to model the part-to-whole relationship between spectral–spatial features and the target classes to represent the rotation invariance of the target classes in the feature space. We evaluate the proposed method with real UAV-based HSI data under controlled and natural field conditions. The effectiveness of the hierarchical features is quantitatively assessed and compared with the existing representative machine learning/deep learning methods on both testing and independent datasets. The experimental results show that the proposed model significantly improves accuracy when considering the hierarchical structure of spectral–spatial features, with average accuracies of 98.09% for the testing dataset and 95.75% for the independent dataset, respectively. Full article
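The capsule layers mentioned above output vectors whose length is read as a class probability; the standard "squash" non-linearity from the capsule-network literature (Sabour et al., 2017) illustrates this idea, although the paper's spectral–spatial capsule layers and routing scheme are not reproduced here:

```python
# Minimal sketch of the "squash" non-linearity used by capsule layers
# (Sabour et al., 2017): it keeps a capsule's direction but maps its length
# into [0, 1), so the length can act as a class probability. CropdocNet's
# spectral-spatial capsule layers and routing are not reproduced here.
import numpy as np

def squash(s, eps=1e-9):
    """s: (n_capsules, capsule_dim) raw capsule inputs."""
    sq_norm = np.sum(s ** 2, axis=-1, keepdims=True)
    scale = sq_norm / (1.0 + sq_norm) / np.sqrt(sq_norm + eps)
    return scale * s

rng = np.random.default_rng(0)
raw = rng.standard_normal((3, 16))                 # three hypothetical class capsules
v = squash(raw)
print(np.linalg.norm(v, axis=-1))                  # capsule lengths in [0, 1)
```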
Figures: (1) the experimental sites in Guyuan, Hebei Province, China; (2) workflow of the CropdocNet framework for potato late blight disease diagnosis; (3) model sensitivity to the depth of the convolutional filters; (4) sensitivity and specificity of each class for the compared models; (5) classification maps for the independent testing dataset from SVM, RF, 3D-CNN, and CropdocNet; (6) patch-scale comparison of ground truth and predicted disease ratios at the two experimental sites; (7) visualized feature spaces and mapping results of SVM, 3D-CNN, and CropdocNet for healthy and diseased plots.
27 pages, 24644 KiB  
Article
Spatiotemporal Evolution Analysis and Future Scenario Prediction of Rocky Desertification in a Subtropical Karst Region
by Chunhua Qian, Hequn Qiang, Changyou Qin, Zi Wang and Mingyang Li
Remote Sens. 2022, 14(2), 292; https://doi.org/10.3390/rs14020292 - 9 Jan 2022
Cited by 12 | Viewed by 2851
Abstract
Landscape change is a dynamic feature of landscape structure and function over time which is usually affected by natural and human factors. The evolution of rocky desertification is a typical landscape change that directly affects ecological environment governance and sustainable development. Guizhou is one of the most typical subtropical karst landform areas in the world. Its special karst rocky desertification phenomenon is an important factor affecting the ecological environment and limiting sustainable development. In this paper, remote sensing imagery and machine learning methods are utilized to model and analyze the spatiotemporal variation of rocky desertification in Guizhou. Based on an improved CA-Markov model, rocky desertification scenarios in the next 30 years are predicted, providing data support for exploration of the evolution rule of rocky desertification in subtropical karst areas and for effective management. The specific results are as follows: (1) Based on the dynamic degree, transfer matrix, evolution intensity, and speed, the temporal and spatial evolution of rocky desertification in Guizhou from 2001 to 2020 was analyzed. It was found that the proportion of no rocky desertification (NRD) areas increased from 48.86% to 63.53% over this period. Potential rocky desertification (PRD), light rocky desertification (LRD), middle rocky desertification (MRD), and severe rocky desertification (SRD) continued to improve, with the improvement showing an accelerating trend after 2010. (2) An improved CA-Markov model was used to predict the future rocky desertification scenario; compared to the traditional CA-Markov model, the Lee–Sallee index increased from 0.681 to 0.723, and figure of merit (FOM) increased from 0.459 to 0.530. The conclusions of this paper are as follows: (1) From 2001 to 2020, the evolution speed of PRD was the fastest, while that of SRD was the slowest. Rocky desertification control should not only focus on areas with serious rocky desertification, but also prevent transformation from NRD to PRD. (2) Rocky desertification will continue to improve over the next 30 years. Possible deterioration areas are concentrated in high-altitude areas, such as the south of Bijie and the east of Liupanshui. Full article
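The Markov half of a CA-Markov model rests on a class-transition matrix estimated from two dated maps; a minimal sketch of that estimation step, on random stand-in maps of the five desertification classes and without the CA suitability rules or the paper's improvements, is:

```python
# Minimal sketch of the Markov step of a CA-Markov model: estimate class
# transition probabilities from two dated rocky-desertification maps. The
# maps here are random stand-ins; the CA suitability factors are omitted.
import numpy as np

classes = ["NRD", "PRD", "LRD", "MRD", "SRD"]
rng = np.random.default_rng(0)
map_t0 = rng.integers(0, 5, size=(200, 200))        # hypothetical map, e.g. 2010
map_t1 = rng.integers(0, 5, size=(200, 200))        # hypothetical map, e.g. 2020

counts = np.zeros((5, 5))
for i in range(5):
    for j in range(5):
        counts[i, j] = np.sum((map_t0 == i) & (map_t1 == j))

transition = counts / counts.sum(axis=1, keepdims=True)   # row-stochastic matrix
print(np.round(transition, 3))                            # P(class i -> class j)
```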
Figures: (1) research area and its digital elevation model (DEM); (2) rocky desertification distribution data (NRD, PRD, LRD, MRD, SRD); (3) suitability factors (artificial surface, soil types, urban area, rock, rivers, lakes, and related classes); (4) processing workflow; (5) AHP-based factor weighting; (6) workflow of the improved CA-Markov prediction model; (7–8) rocky desertification evolution intensity and its transition matrices; (9) distribution map of rocky desertification evolution; (10–11) evolution speed and annual average evolution speed; (12–13) remote sensing inversion versus prediction for 2020 and the FOM result; (14–16) future scenario predictions for 2025–2050, amelioration and deterioration regions for 2020–2030, and 2030 scenarios under historical-evolution, major-governance, and complete-governance options.
21 pages, 1899 KiB  
Article
A Random Forest Algorithm for Retrieving Canopy Chlorophyll Content of Wheat and Soybean Trained with PROSAIL Simulations Using Adjusted Average Leaf Angle
by Quanjun Jiao, Qi Sun, Bing Zhang, Wenjiang Huang, Huichun Ye, Zhaoming Zhang, Xiao Zhang and Binxiang Qian
Remote Sens. 2022, 14(1), 98; https://doi.org/10.3390/rs14010098 - 25 Dec 2021
Cited by 33 | Viewed by 4487
Abstract
Canopy chlorophyll content (CCC) is an important indicator for crop-growth monitoring and crop productivity estimation. The hybrid method, involving the PROSAIL radiative transfer model and machine learning algorithms, has been widely applied for crop CCC retrieval. However, PROSAIL’s homogeneous canopy hypothesis limits the ability to use the PROSAIL-based CCC estimation across different crops with a row structure. In addition to leaf area index (LAI), average leaf angle (ALA) is the most important canopy structure factor in the PROSAIL model. Under the same LAI, adjustment of the ALA can make a PROSAIL simulation obtain the same canopy gap as the heterogeneous canopy at a specific observation angle. Therefore, parameterization of an adjusted ALA (ALAadj) is an optimal choice to make the PROSAIL model suitable for specific row-planted crops. This paper attempted to improve PROSAIL-based CCC retrieval for different crops, using a random forest algorithm, by introducing the prior knowledge of crop-specific ALAadj. Based on the field reflectance spectrum at nadir, leaf area index, and leaf chlorophyll content, parameterization of the ALAadj in the PROSAIL model for wheat and soybean was carried out. An algorithm integrating the random forest and PROSAIL simulations with prior ALAadj information was developed for wheat and soybean CCC retrieval. Ground-measured CCC measurements were used to validate the CCC retrieved from canopy spectra. The results showed that the ALAadj values (62 degrees for wheat; 45 degrees for soybean) that were parameterized for the PROSAIL model demonstrated good discrimination between the two crops. The proposed algorithm improved the CCC retrieval accuracy for wheat and soybean, regardless of whether continuous visible to near-infrared spectra with 50 bands (RMSE from 39.9 to 32.9 μg cm−2; R2 from 0.67 to 0.76) or discrete spectra with 13 bands (RMSE from 43.9 to 33.7 μg cm−2; R2 from 0.63 to 0.74) and nine bands (RMSE from 45.1 to 37.0 μg cm−2; R2 from 0.61 to 0.71) were used. The proposed hybrid algorithm, based on PROSAIL simulations with ALAadj, has the potential for satellite-based CCC estimation across different crop types, and it also has a good reference value for the retrieval of other crop parameters. Full article
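The hybrid retrieval described above trains a regressor on PROSAIL-simulated spectra rather than on field data; a minimal sketch of that step is given below, with random placeholder arrays standing in for a PROSAIL lookup table generated with the crop-specific adjusted ALA (running PROSAIL itself is outside the scope of this sketch):

```python
# Minimal sketch of the hybrid retrieval idea: train a random forest regressor
# on a PROSAIL-simulated lookup table (reflectance -> canopy chlorophyll
# content). The "simulated" arrays below are random placeholders standing in
# for PROSAIL runs parameterised with the crop-specific ALAadj.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_sim, n_bands = 2000, 50                        # e.g. 50 visible-to-NIR bands
sim_reflectance = rng.uniform(0.0, 0.6, (n_sim, n_bands))   # placeholder LUT spectra
sim_ccc = rng.uniform(20, 250, n_sim)            # placeholder CCC labels (ug cm^-2)

rfr = RandomForestRegressor(n_estimators=300, random_state=0)
rfr.fit(sim_reflectance, sim_ccc)

measured = rng.uniform(0.0, 0.6, (5, n_bands))   # hypothetical field canopy spectra
print(rfr.predict(measured))                     # retrieved CCC estimates
```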
Graphical abstract and figures: (1) workflow of crop CCC retrieval and validation with prior ALAadj knowledge in the PROSAIL-based RFR model; (2) conceptual effect of ALAadj in bridging the soil gap fraction difference between homogeneous and heterogeneous canopies with the same LAI; (3) variable importance of canopy reflectance at different wavelengths for homogeneous canopies and vegetation–soil mixed pixels; (4) scatter plots of ground-measured versus predicted CCC for RFR models without and with prior ALAadj; (5) results using continuous (50-band) and discrete (13- and 9-band) spectra; (6) retrieval accuracy characterized by LAI for wheat and soybean; (7) applicability of the nadir ALAadj parameterization to different view zenith angles.
21 pages, 8842 KiB  
Article
High-Resolution Gridded Livestock Projection for Western China Based on Machine Learning
by Xianghua Li, Jinliang Hou and Chunlin Huang
Remote Sens. 2021, 13(24), 5038; https://doi.org/10.3390/rs13245038 - 11 Dec 2021
Cited by 18 | Viewed by 4428
Abstract
Accurate high-resolution gridded livestock distribution data are of great significance for the rational utilization of grassland resources, environmental impact assessment, and the sustainable development of animal husbandry. Traditional livestock distribution data are collected at the administrative unit level, which does not provide a sufficiently detailed geographical description of livestock distribution. In this study, we proposed a scheme by integrating high-resolution gridded geographic data and livestock statistics through machine learning regression models to spatially disaggregate the livestock statistics data into 1 km × 1 km spatial resolution. Three machine learning models, including support vector machine (SVM), random forest (RF), and deep neural network (DNN), were constructed to represent the complex nonlinear relationship between various environmental factors (e.g., land use practice, topography, climate, and socioeconomic factors) and livestock density. By applying the proposed method, we generated a set of 1 km × 1 km spatial distribution maps of cattle and sheep for western China from 2000 to 2015 at five-year intervals. Our projected cattle and sheep distribution maps reveal the spatial heterogeneity structures and change trend of livestock distribution at the grid level from 2000 to 2015. Compared with the traditional census livestock density, the gridded livestock distribution based on DNN has the highest accuracy, with the determinant coefficient (R2) of 0.75, root mean square error (RMSE) of 9.82 heads/km2 for cattle, and the R2 of 0.73, RMSE of 31.38 heads/km2 for sheep. The accuracy of the RF is slightly lower than the DNN but higher than the SVM. The projection accuracy of the three machine learning models is superior to those of the published Gridded Livestock of the World (GLW) datasets. Consequently, deep learning has the potential to be an effective tool for high-resolution gridded livestock projection by combining geographic and census data. Full article
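The disaggregation scheme described above can be reduced to two steps: predict a relative per-pixel density from gridded covariates, then rescale each county so its pixels sum to the census total; the sketch below uses random placeholder data and a random forest only (the paper also evaluates SVM and DNN models):

```python
# Minimal sketch of spatial disaggregation: predict a per-pixel livestock
# density from gridded covariates with a regression model, then rescale each
# county's pixels so they sum to the census total. All data are placeholders;
# the training target in the actual study comes from the census-based scheme.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_pix, n_counties = 5000, 20
covariates = rng.random((n_pix, 6))              # e.g. land use, DEM, climate, socioeconomic
county_id = rng.integers(0, n_counties, n_pix)   # which county each 1-km pixel belongs to
census = rng.uniform(1e4, 1e5, n_counties)       # county-level head counts (dummy)

train_density = rng.gamma(2.0, 10.0, n_pix)      # dummy supervision for illustration only
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(covariates, train_density)

raw = model.predict(covariates)                  # relative per-pixel density
gridded = np.empty_like(raw)
for c in range(n_counties):                      # mass-preserving rescaling per county
    m = county_id == c
    gridded[m] = raw[m] * census[c] / raw[m].sum()
print(float(gridded[county_id == 0].sum()), float(census[0]))   # county sums now match
```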
Graphical abstract and figures: (1) location, land use, and land cover of the study area; (2) flowchart of the livestock spatialization process; (3–4) cattle and sheep distribution densities from county-level censuses and from the SVR, RF, and DNN estimates at 1 km for 2000, 2005, 2010, and 2015; (5–6) enlarged spatial detail of cattle and sheep for two local regions; (7) accuracy of the spatialization results aggregated to the county scale; (8–9) spatiotemporal changes of cattle and sheep based on the DNN estimates; (10–11) scatter diagrams comparing gridded densities with census data and the GLW2/GLW3 products; (12–13) spatial distribution densities of cattle and sheep in 2000; (14) correlations between environmental factors and cattle and sheep density; (15) importance of the environmental factors.
20 pages, 28343 KiB  
Article
Estimating Vertical Distribution of Leaf Water Content within Wheat Canopies after Head Emergence
by Weiping Kong, Wenjiang Huang, Lingling Ma, Lingli Tang, Chuanrong Li, Xianfeng Zhou and Raffaele Casa
Remote Sens. 2021, 13(20), 4125; https://doi.org/10.3390/rs13204125 - 14 Oct 2021
Cited by 7 | Viewed by 2646
Abstract
Monitoring the vertical profile of leaf water content (LWC) within wheat canopies after head emergence is vitally important for increasing crop yield. However, estimating the vertical distribution of LWC from remote sensing data remains challenging because of the effects of wheat spikes and the limited efficacy of sensor measurements from the nadir direction. Using two-year field experiments covering different growth stages after head emergence, nitrogen (N) application rates, and wheat cultivars, we investigated the vertical distribution of LWC within canopies, the changes in canopy reflectance after spike removal, and the relationships between spectral indices and LWC in the upper, middle, and bottom layers. The interrelationships among LWC at the different vertical layers were established, and four ratio of reflectance difference (RRD)-type indices were proposed, based on the published WI and NDWSI indices, to determine the vertical distribution of LWC. The results indicated a bell-shaped distribution of LWC in wheat plants, with the highest value appearing at the middle layer, and significant linear correlations between middle-LWC vs. upper-LWC and middle-LWC vs. bottom-LWC (r ≥ 0.92) were identified. The effects of wheat spikes on spectral reflectance occurred mainly in the near-infrared to shortwave-infrared regions, which in turn decreased the accuracy of LWC estimation. Spectral indices at the middle layer outperformed those at the other two layers in LWC assessment and were less susceptible to the effects of wheat spikes; in particular, the newly proposed narrow-band WI-4 and NDWSI-4 indices exhibited great potential in tracking changes in middle-LWC (R2 = 0.82 and 0.84, respectively). By taking into account the effects of wheat spikes and the interrelationships of vertical LWC within canopies, an indirect induction strategy was developed for modeling the upper-LWC and bottom-LWC. The indirect induction models based on the WI-4 and NDWSI-4 indices were more effective than the models obtained from the conventional direct estimation method, with R2 of 0.78 and 0.81 for the upper-LWC estimation, and 0.75 and 0.74 for the bottom-LWC estimation, respectively. Full article
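The index-based approach can be illustrated with a short sketch of how two-band water indices and extended multi-band variants are computed from a canopy reflectance spectrum. The WI ratio form (R900/R970) follows the published water index; the NDWSI band pair and the four-band "ratio of reflectance difference" form shown here are placeholder assumptions meant only to convey the structure of such indices, not the band combinations optimized in the study.

```python
# Illustrative computation of canopy water indices from a reflectance spectrum.
# Band positions for the normalized-difference and four-band forms are
# placeholder assumptions.
import numpy as np

def reflectance_at(wavelengths, reflectance, target_nm):
    """Return reflectance at the measured band closest to target_nm."""
    idx = int(np.argmin(np.abs(wavelengths - target_nm)))
    return reflectance[idx]

def water_index(wl, refl):
    # Two-band ratio water index, WI = R900 / R970.
    return reflectance_at(wl, refl, 900) / reflectance_at(wl, refl, 970)

def norm_diff_index(wl, refl, b1, b2):
    # Generic normalized-difference form, e.g. an NDWSI-like index.
    r1, r2 = reflectance_at(wl, refl, b1), reflectance_at(wl, refl, b2)
    return (r1 - r2) / (r1 + r2)

def rrd_index(wl, refl, b1, b2, b3, b4):
    # Schematic four-band "ratio of reflectance difference" form: reflectance
    # differences against two additional bands replace the single reflectances.
    r1, r2 = reflectance_at(wl, refl, b1), reflectance_at(wl, refl, b2)
    r3, r4 = reflectance_at(wl, refl, b3), reflectance_at(wl, refl, b4)
    return (r1 - r3) / (r2 - r4)

# Example with a synthetic spectrum sampled at 1 nm from 400 to 2500 nm.
wl = np.arange(400, 2501, 1.0)
refl = 0.3 + 0.05 * np.sin(wl / 200.0)   # stand-in for a measured spectrum
print(water_index(wl, refl), norm_diff_index(wl, refl, 860, 1240))
```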
Show Figures

Graphical abstract

Figure 1. The location of the study area.
Figure 2. Schematic diagram of the vertical division of leaves within the wheat canopy.
Figure 3. Vertical profiles of LWC within wheat canopies (a) at different growth stages and (b) under different N treatments at the head emergence stage (Z54); error bars represent the standard deviation of the vertical LWC measurements. Relationships between (c) the middle-LWC vs. the upper-LWC and (d) the middle-LWC vs. the bottom-LWC for both experiments.
Figure 4. Spectral reflectance of the entire wheat canopy and of the canopy without spikes (a) at different growth stages under the N300 treatment, and (b) under different N treatments at the milk-filling stage (Z73).
Figure 5. The relative variation rate (Rv) of the R2 of the relationships between published spectral indices and LWC in the upper, middle, and bottom layers before and after removing spikes.
Figure 6. R2 curves between the middle-LWC and (a) the WI-3 and (b) NDWSI-3 indices when using each waveband over 400–2500 nm as the third band (λ3); R2 contour maps between the middle-LWC and (c) the WI-4 and (d) NDWSI-4 indices when using all possible band combinations over 400–2500 nm as the third (λ3) and fourth (λ4) bands.
Figure 7. Scatter plots of the relationships between the optimal RRD-type indices or the corresponding published spectral indices and the middle-LWC for the entire canopy. Subplots (a–c) show the relationships between the WI, WI-3, and WI-4 and the middle-LWC, respectively; subplots (d–f) show the relationships between the NDWSI, NDWSI-3, and NDWSI-4 and the middle-LWC, respectively.
Figure 8. Validation of the estimation models derived from the WI-4, NDWSI-4, and NDWSI for the middle-LWC (green points), the upper-LWC (blue diamonds), and the bottom-LWC (black stars). The black solid lines indicate linear fits, the black dashed lines indicate the 1:1 line, and the red dashed lines indicate the 95% confidence intervals of prediction.
Figure 9. Comparison of the validation models between measured and estimated LWC in the upper layer (blue and red diamonds) and the bottom layer (black and red stars) using the indirect induction and direct estimation methods. The four subplots in the first row are validation results based on the WI-4 for the upper and bottom layers, whereas the four subplots in the second row are validation results based on the NDWSI-4 for the upper and bottom layers.
22 pages, 6796 KiB  
Article
Three-Dimensional Convolutional Neural Network Model for Early Detection of Pine Wilt Disease Using UAV-Based Hyperspectral Images
by Run Yu, Youqing Luo, Haonan Li, Liyuan Yang, Huaguo Huang, Linfeng Yu and Lili Ren
Remote Sens. 2021, 13(20), 4065; https://doi.org/10.3390/rs13204065 - 11 Oct 2021
Cited by 51 | Viewed by 4519
Abstract
As one of the most devastating disasters to pine forests, pine wilt disease (PWD) has caused tremendous ecological and economic losses in China. An effective way to prevent large-scale PWD outbreaks is to detect and remove damaged pine trees at the early stage of infection. However, early infected pine trees do not show obvious changes in morphology or color in the visible wavelength range, which makes early detection of PWD difficult. Unmanned aerial vehicle (UAV)-based hyperspectral imagery (HI) has great potential for early detection of PWD. However, commonly used methods, such as the two-dimensional convolutional neural network (2D-CNN), fail to simultaneously extract and fully utilize the spatial and spectral information, whereas the three-dimensional convolutional neural network (3D-CNN) can collect this information directly from raw hyperspectral data. In this paper, we applied residual blocks to a 3D-CNN and constructed a 3D-Res CNN model, whose performance was then compared with that of 3D-CNN, 2D-CNN, and 2D-Res CNN in identifying PWD-infected pine trees from hyperspectral images. The 3D-Res CNN model outperformed the other models, achieving an overall accuracy (OA) of 88.11% and an accuracy of 72.86% for detecting early infected pine trees (EIPs). Using only 20% of the training samples, the OA and EIP accuracy of 3D-Res CNN still reached 81.06% and 51.97%, which is superior to the state-of-the-art method for early detection of PWD based on hyperspectral images. Collectively, the 3D-Res CNN proposed in this paper was the most accurate and effective model for early detection of PWD, making the prediction and control of PWD more accurate and effective. The model can also be applied to detect pine trees damaged by other diseases or insect pests in the forest. Full article
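To make the architectural idea concrete, the sketch below shows a minimal 3D convolutional network with residual (skip) connections in PyTorch, operating on hyperspectral patches. The channel counts, kernel sizes, patch shape, and three output classes are illustrative assumptions and will differ from the exact 3D-Res CNN configuration reported in the paper.

```python
# Minimal sketch of a 3D residual convolutional block and a small classifier
# for hyperspectral patches. Hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class ResBlock3D(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = x
        out = self.relu(self.conv1(x))
        out = self.conv2(out)
        return self.relu(out + identity)   # residual (skip) connection

class Simple3DResCNN(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            ResBlock3D(16),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            ResBlock3D(32),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                  # x: (batch, 1, bands, height, width)
        x = self.features(x).flatten(1)
        return self.classifier(x)

# Example: a batch of 4 patches with 100 spectral bands and 9 x 9 pixels,
# classified into broad-leaved, early infected, and late infected trees.
model = Simple3DResCNN(n_classes=3)
logits = model(torch.randn(4, 1, 100, 9, 9))
print(logits.shape)                        # torch.Size([4, 3])
```

Treating the spectral dimension as the depth axis of the 3D convolution is what lets spatial and spectral patterns be learned jointly, which is the advantage over a 2D-CNN that the abstract highlights.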
Show Figures

Figure 1. The global distribution of pine wilt disease (data source: https://www.cabi.org/ISC; accessed on 16 August 2021).
Figure 2. The process of pine wood nematode infection.
Figure 3. (a) Distribution of pine wilt disease (PWD) and the major pine species susceptible to PWD in China. (b) PWD has caused large areas of pine forest death in northern China.
Figure 4. Morphology of Monochamus saltuarius and the pine wood nematode (PWN). (a) Male adult PWN; (b) head of the male PWN; (c) tail of the male PWN; (d) female adult PWN; (e) head of the female PWN; (f) vulva of the female PWN; (g) tail of the female PWN.
Figure 5. The overall workflow of the study.
Figure 6. Study area. (a) Location of the study area; (b) the hyperspectral image (in false color composition) and the locations of the sample plots; (c) the hyperspectral cube of the corresponding area from the hyperspectral image.
Figure 7. The UAV-based hyperspectral and LiDAR system.
Figure 8. UAV images of pine trees from the early (May) to the late (July) stage of PWD infection.
Figure 9. The architecture of the 3D-Res CNN model, which includes four convolutional layers, two max pooling layers, and two residual blocks. Conv stands for convolutional layer; ReLU stands for rectified linear unit.
Figure 10. Training, validation, and testing samples of each tree category with the true labels.
Figure 11. Reflectance curves of broad-leaved trees, early infected pine trees, and late infected pine trees.
Figure 12. Classification results for the three tree categories in the study area using the four models.
Figure 13. Confusion matrices for the three tree categories using the different models, where A, B, and C represent broad-leaved trees, early infected pine trees, and late infected pine trees, respectively.
Figure 14. Classification performance of the 3D-Res CNN model using different training sample sizes.
16 pages, 26173 KiB  
Article
Using UAV-Based Hyperspectral Imagery to Detect Winter Wheat Fusarium Head Blight
by Huiqin Ma, Wenjiang Huang, Yingying Dong, Linyi Liu and Anting Guo
Remote Sens. 2021, 13(15), 3024; https://doi.org/10.3390/rs13153024 - 1 Aug 2021
Cited by 37 | Viewed by 4786
Abstract
Fusarium head blight (FHB) is a major winter wheat disease in China, and its accurate and timely detection is vital to scientific field management. In this study, we explore the potential of hyperspectral imagery obtained from an unmanned aerial vehicle (UAV) for detecting wheat FHB by combining three types of spectral features, namely spectral bands (SBs), vegetation indices (VIs), and wavelet features (WFs). First, two UAV-based hyperspectral images were acquired during the wheat filling period. SBs, VIs, and WFs that were sensitive to wheat FHB were extracted and optimized from the two images. Subsequently, a field-scale wheat FHB detection model was formulated with a support vector machine, based on the optimal spectral feature combination of SBs, VIs, and WFs (SBs + VIs + WFs). Two commonly used data normalization algorithms were applied before model construction. Models based on the WFs alone and on the combination of optimal SBs and VIs (SBs + VIs) were formulated for comparison and testing. The results showed that the detection model based on the normalized SBs + VIs + WFs, using the min–max normalization algorithm, achieved the highest R2 of 0.88 and the lowest RMSE of 2.68% among the three models. Our results suggest that UAV-based hyperspectral imaging is promising for field-scale detection of wheat FHB, and that combining traditional SBs and VIs with WFs can effectively improve detection accuracy. Full article
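As a schematic of the modeling step described above, min–max normalization of the combined SBs + VIs + WFs features followed by support vector regression of the disease severity, the following scikit-learn sketch can serve as a starting point. The synthetic data, kernel choice, and hyperparameters are illustrative assumptions rather than the study's tuned settings.

```python
# Minimal sketch: min-max scaling of combined spectral features followed by
# support vector regression of disease severity (DER). All values synthetic.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# X: rows are surveyed plots, columns are the selected SBs, VIs, and WFs;
# y: measured disease severity (%) from the field survey. Synthetic stand-ins:
rng = np.random.default_rng(0)
X = rng.random((120, 12))
y = 40 * X[:, 0] - 15 * X[:, 5] + rng.normal(0, 2, 120)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = make_pipeline(MinMaxScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.5))
model.fit(X_train, y_train)

y_pred = model.predict(X_test)
rmse = mean_squared_error(y_test, y_pred) ** 0.5
print(f"R2 = {r2_score(y_test, y_pred):.2f}, RMSE = {rmse:.2f}")
```

Swapping the feature matrix for the WFs alone, or for the optimal SBs + VIs, reproduces the comparison models mentioned in the abstract; replacing MinMaxScaler with StandardScaler corresponds to a z-score normalization alternative.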
Show Figures

Figure 1. Experimental area location and field survey sampling positions.
Figure 2. Methodological framework of the wheat FHB detection model.
Figure 3. Disease-sensitive wavelet regions extracted from the UAV hyperspectral imagery.
Figure 4. R2 among different features of each spectral feature type: (a) SBs, (b) VIs, and (c) WFs.
Figure 5. Scatter plots of measured versus estimated DER for the SVM detection models based on (a–c) the initial spectral feature combinations of SBs, VIs, and WFs; (d–f) the spectral feature combinations of SBs, VIs, and WFs normalized using the MMN algorithm; and (g–i) the spectral feature combinations of SBs, VIs, and WFs normalized using the ZSN algorithm.
Figure 6. Damage maps of wheat FHB on (a) 3 May and (b) 8 May 2019, obtained using the normalized (SBs + VIs + WFs)-based SVM detection model.