Search Results (1,614)

Search Parameters:
Keywords = crop classification

18 pages, 4470 KiB  
Article
A CARS-SPA-GA Feature Wavelength Selection Method Based on Hyperspectral Imaging with Potato Leaf Disease Classification
by Xue Li, Xueliang Fu and Honghui Li
Sensors 2024, 24(20), 6566; https://doi.org/10.3390/s24206566 (registering DOI) - 12 Oct 2024
Abstract
Early blight and ladybug beetle infestation are important factors threatening potato yields. Research on disease classification using the spectral differences between the healthy and disease-stressed leaves of plants has made good progress in a variety of crops, but less work has addressed early blight in potato. This paper proposes a CARS-SPA-GA feature selection method. First, the raw spectral data of potato leaves in the visible/near-infrared region were preprocessed. Next, feature wavelengths were selected with competitive adaptive reweighted sampling (CARS) and the successive projection algorithm (SPA); the two sets of wavelengths were then merged and duplicates removed, and a secondary feature selection was conducted with a genetic algorithm (GA). Finally, the feature wavelengths were fed into different classifiers, whose parameters were optimized using a real-coded genetic algorithm (RCGA). The experimental results show that the feature wavelengths selected by the CARS-SPA-GA method accounted for only 9% of the full band, and the classification accuracy of the RCGA-optimized support vector machine (SVM) classification model reached 98.366%. These results show that it is feasible to classify early blight and ladybug beetle infestation in potato using visible/near-infrared spectral data, and that the CARS-SPA-GA method can substantially improve the accuracy and detection efficiency of potato pest and disease classification. Full article
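The wavelength-merging step in this pipeline (pool the CARS and SPA selections, drop duplicates, then score the reduced band subset with a classifier) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the band indices, spectra, and SVM settings are invented placeholders, and the GA/RCGA stages that follow in the paper are omitted.

```python
# Minimal sketch of merging CARS- and SPA-selected wavelengths and scoring the
# reduced band subset with a cross-validated SVM (synthetic data, assumed indices).
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 224))          # 150 leaf spectra x 224 bands (synthetic)
y = rng.integers(0, 3, size=150)         # healthy / early blight / beetle damage

cars_bands = [5, 17, 42, 88, 120, 150]   # hypothetical CARS selection
spa_bands = [17, 60, 88, 133, 190]       # hypothetical SPA selection

merged = sorted(set(cars_bands) | set(spa_bands))     # reorganize and de-duplicate
clf = SVC(kernel="rbf", C=10.0, gamma="scale")        # RCGA would tune C and gamma
score = cross_val_score(clf, X[:, merged], y, cv=5).mean()
print(f"{len(merged)} bands retained, CV accuracy = {score:.3f}")
```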
(This article belongs to the Section Optical Sensors)
Figures:
Figure 1. Potato leaves under disease and insect pest stress: (a) early blight infestation; (b) ladybird beetle infestation.
Figure 2. Hyperspectral imaging system.
Figure 3. A schematic diagram of the ROI selection process.
Figure 4. The CARS-SPA-GA feature selection flow diagram.
Figure 5. Flowchart of RCGA in optimizing the parameters of an SVM model.
Figure 6. Comparison of spectral curves of three types of potato leaves.
Figure 7. Comparison of effectiveness enhancements of MSC preprocessing methods on different classifiers.
Figure 8. Number of wavelengths and RMSECV in CARS feature wavelength selection.
Figure 9. Distribution of feature wavelengths selected by different algorithms: (a) CARS; (b) SPA; (c) GA; (d) CARS vs. SPA.
Figure 10. Distribution of feature wavelengths selected by CARS-SPA-GA.
Figure 11. Confusion matrix of feature wavelengths selected by the four algorithms in the RCGA-SVM model: (a) CARS; (b) SPA; (c) GA; (d) CARS-SPA-GA.
Figure 12. Confusion matrix of feature wavelengths selected by the four algorithms in the RCGA-RF model: (a) CARS; (b) SPA; (c) GA; (d) CARS-SPA-GA.
Figure 13. Confusion matrix of feature wavelengths selected by the four algorithms in the RCGA-MLP model: (a) CARS; (b) SPA; (c) GA; (d) CARS-SPA-GA.
16 pages, 3804 KiB  
Article
Detection of Mechanical Damage in Corn Seeds Using Hyperspectral Imaging and the ResNeSt_E Deep Learning Network
by Hua Huang, Yinfeng Liu, Shiping Zhu, Chuan Feng, Shaoqi Zhang, Lei Shi, Tong Sun and Chao Liu
Agriculture 2024, 14(10), 1780; https://doi.org/10.3390/agriculture14101780 - 10 Oct 2024
Abstract
Corn is one of the global staple grains and the largest grain crop in China. During harvesting, grain separation, and corn production, corn is susceptible to mechanical damage including surface cracks, internal cracks, and breakage. However, the internal cracks are difficult to observe. In this study, hyperspectral imaging was used to detect mechanical damage in corn seeds. The corn seeds were divided into four categories: intact, broken, internally cracked, and surface-cracked. This study compared three feature extraction methods, including principal component analysis (PCA), kernel PCA (KPCA), and factor analysis (FA), as well as a joint feature extraction method consisting of a combination of these methods. The dimensionality reduction results of the three methods (FA + KPCA, KPCA + FA, and PCA + FA) were combined to form a new combined dataset and improve the classification. We then compared the effects of six classification models (ResNet, ShuffleNet-V2, MobileNet-V3, ResNeSt, EfficientNet-V2, and MobileNet-V4) and proposed a ResNeSt_E network based on the ResNeSt and efficient multi-scale attention modules. The accuracy of ResNeSt_E reached 99.0%, and this was 0.4% higher than that of EfficientNet-V2 and 0.7% higher than that of ResNeSt. Additionally, the number of parameters and memory requirements were reduced and the frames per second were improved. We compared two dimensionality reduction methods: KPCA + FA and PCA + FA. The classification accuracies of the two methods were the same; however, PCA + FA was much more efficient than KPCA + FA and was more suitable for practical detection. The ResNeSt_E network could detect both internal and surface cracks in corn seeds, making it suitable for mobile terminal applications. The results demonstrated that detecting mechanical damage in corn seeds using hyperspectral images was possible. This study provides a reference for mechanical damage detection methods for corn. Full article
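The joint feature-extraction idea (for example, the PCA + FA combination) amounts to concatenating two low-dimensional projections of the same spectra before classification. A minimal sketch, with synthetic data and arbitrary component counts rather than the paper's settings:

```python
# Hedged sketch of a "PCA + FA" combined feature set: two projections of the same
# hyperspectral pixels are concatenated (shapes and component counts are assumptions).
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 256))              # 500 seed spectra x 256 bands (synthetic)

pca_feats = PCA(n_components=10).fit_transform(X)
fa_feats = FactorAnalysis(n_components=10).fit_transform(X)
combined = np.hstack([pca_feats, fa_feats])  # combined dataset fed to the classifier
print(combined.shape)                        # (500, 20)
```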
Figures:
Figure 1. Hyperspectral true-color images of corn seeds: (a) IN, (b) BR, (c) IC, and (d) SC.
Figure 2. Overall process.
Figure 3. Hyperspectral image processing procedure.
Figure 4. Feature extraction.
Figure 5. Schematic depicting the functioning of ResNeSt_E.
Figure 6. Training process of ResNeSt_E on the combined dataset: (a) training set loss function, (b) validation set loss function, (c) training set accuracy, (d) assessment indicators for the classification of the validation set.
Figure 7. Confusion matrix of ResNeSt_E on the test set: (a) FA + KPCA, (b) KPCA + FA, (c) PCA + FA.
20 pages, 9892 KiB  
Article
Estimation of Maize Water Requirements Based on the Low-Cost Image Acquisition Methods and the Meteorological Parameters
by Jiuxiao Zhao, Jianping Tao, Shirui Zhang, Jingjing Li, Teng Li, Feifei Shan and Wengang Zheng
Agronomy 2024, 14(10), 2325; https://doi.org/10.3390/agronomy14102325 - 10 Oct 2024
Abstract
This study aims to enhance maize water demand calculation. We calculate crop evapotranspiration (ETc) from mobile phone photography and meteorological parameters. For the crop coefficient (Kc), we use mobile phone camera images to drive a real-time monitoring model of Kc based on changes in plant canopy coverage (PGC). PGC is calculated by constructing a PGC classification network together with a Convolutional Block Attention Module (CBAM)-U2Net segmentation network. For the reference crop evapotranspiration (ETo), we constructed simplified estimation models based on SVR, LSTM, Optuna-LSTM, and GWO-SVM driven by public meteorological data, and evaluated their performance. The results demonstrate that our method achieves a high classification accuracy for PGC (98.9%) and a segmentation accuracy of 95.68% for the CBAM-U2Net-based segmentation network. The Kc calculation model exhibits a root mean square error (RMSE) of 0.053. For ETo estimation, the Optuna-LSTM model with four variables performs best, with a correlation coefficient (R2) of 0.953. The final R2 between the estimated and true ETc values is 0.918, with an RMSE of 0.014. This method can effectively estimate the water demand of maize. Full article
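The final step, ETc = Kc × ETo, is a simple product once Kc has been estimated from canopy coverage. The sketch below assumes a linear Kc(PGC) mapping purely for illustration; the paper derives Kc from its image-based PGC monitoring model instead, so the function below is a placeholder.

```python
# Minimal sketch: crop evapotranspiration as the product of a canopy-coverage-driven
# crop coefficient and reference evapotranspiration. Kc(PGC) mapping is an assumption.
def kc_from_pgc(pgc: float, kc_min: float = 0.3, kc_max: float = 1.2) -> float:
    """Interpolate a crop coefficient from fractional plant canopy coverage (0-1)."""
    return kc_min + (kc_max - kc_min) * max(0.0, min(1.0, pgc))

def etc(pgc: float, eto_mm_day: float) -> float:
    """Crop evapotranspiration ETc = Kc * ETo (mm/day)."""
    return kc_from_pgc(pgc) * eto_mm_day

print(etc(pgc=0.65, eto_mm_day=4.8))   # ~4.2 mm/day for 65% canopy coverage
```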
(This article belongs to the Section Precision and Digital Agriculture)
Figures:
Figure 1. Test site selection.
Figure 2. Overhead shots of maize varieties at different growth stages.
Figure 3. Weighing lysimeter data acquisition system.
Figure 4. Model framework diagram.
Figure 5. PGC-Classification framework diagram.
Figure 6. CBAM-U2net framework diagram.
Figure 7. ETo estimation model diagram.
Figure 8. Accuracy rate and loss rate chart before and after transfer learning.
Figure 9. Classification network heat map.
Figure 10. Comparison of the effects of different segmentation networks: (a) input; (b) CBAM-U2net; (c) U2net; (d) Deeplabv3; (e) FCN.
Figure 11. PGC changes in three maize varieties at different stages: (a) after emergence stage; (b) after 4th leaf; (c) after 5th leaf; (d) after 6th leaf.
Figure 12. Correlation analysis between meteorological parameters and ETo true values: (a) air temperature, (b) air humidity, (c) rainfall, (d) maximum wind speed, (e) minimum wind speed, (f) average wind speed, (g) wind direction, (h) Rad, and (i) atmospheric pressure.
Figure 13. ETo estimation under different meteorological parameter inputs.
Figure 14. Comparison chart between real and estimated values of ETc.
11 pages, 321 KiB  
Article
Adaptability and Stability of Irrigated Barley Genotypes in the Cerrado of the Federal District
by Rodolfo Thomé, Renato Amabile, Juaci Malaquias, Nara Souza, Gustavo Santos, João Melo, Arlini Fialho and Mariana Santos
Agriculture 2024, 14(10), 1776; https://doi.org/10.3390/agriculture14101776 - 9 Oct 2024
Abstract
Barley (Hordeum vulgare L.) is a significant cereal globally, widely used in human and animal food. Furthermore, it is strongly influenced by genotype-by-environment interactions, while being considered a highly adaptable crop. This study aimed to estimate the parameters of adaptability and stability for 17 barley genotypes, compared with two controls (BRS 180 and BRS 195), grown under irrigation in the Cerrado. The experiments were conducted from 2017 to 2020, from May to September, in two different experimental areas of Embrapa in the Federal District, Brazil. Five traits were evaluated: 1. estimated grain yield (kg ha−1); 2. CL1—commercial classification of first grains (>2.5 mm) (%); 3. TGW—1000-grain weight (g); 4. plant height (cm); 5. cycle—days after emergence to earing (days). The data obtained were analyzed for normality and homogeneity of variance, subjected to individual and joint analysis of variance, with means compared by Tukey's test at 5% significance, and the adaptability and stability parameters were estimated for the genotypes. The coefficients of environmental variation (CV%) were generally low, indicating good experimental precision. The PFC 2006053 and PFC 2006054 genotypes have broad adaptability and high stability for most traits and outperformed the controls and the overall experiment average. Full article
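Adaptability and stability parameters of the kind estimated here are commonly obtained by regressing each genotype's yield on an environmental index (the Eberhart–Russell approach). The sketch below illustrates that general idea with made-up yields; it is not necessarily the exact procedure or software used in this study.

```python
# Hedged illustration of Eberhart-Russell-style adaptability (slope b_i) and
# stability (deviation from regression) parameters; yields are synthetic.
import numpy as np

# rows = genotypes, columns = environments (year x site); yields in kg/ha
yields = np.array([
    [4200, 4800, 5100, 4600],
    [3900, 4700, 5400, 4900],
    [4100, 4300, 4500, 4400],
], dtype=float)

env_index = yields.mean(axis=0) - yields.mean()     # environmental index I_j
for g, row in enumerate(yields):
    b, a = np.polyfit(env_index, row, 1)            # adaptability slope b_i
    resid = row - (a + b * env_index)
    s2d = (resid ** 2).sum() / (len(row) - 2)       # simplified deviation measure
    print(f"genotype {g}: b_i = {b:.2f}, S2d_i = {s2d:.1f}")
```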
(This article belongs to the Section Crop Genetics, Genomics and Breeding)
26 pages, 2867 KiB  
Review
A Review of the Application of Hyperspectral Imaging Technology in Agricultural Crop Economics
by Jinxing Wu, Yi Zhang, Pengfei Hu and Yanying Wu
Coatings 2024, 14(10), 1285; https://doi.org/10.3390/coatings14101285 - 9 Oct 2024
Abstract
China is a large agricultural country, and the crop economy holds an important place in the national economy. The identification of crop diseases and pests, as well as the non-destructive classification of crops, has always been a challenge in agricultural development, hindering the rapid growth of the agricultural economy. Hyperspectral imaging technology combines imaging and spectral techniques, using hyperspectral cameras to acquire raw image data of crops. After correcting and preprocessing the raw image data to obtain the required spectral features, it becomes possible to achieve the rapid non-destructive detection of crop diseases and pests, as well as the non-destructive classification and identification of agricultural products. This paper first provides an overview of the current applications of hyperspectral imaging technology in crops both domestically and internationally. It then summarizes the methods of hyperspectral data acquisition and application scenarios. Subsequently, it organizes the processing of hyperspectral data for crop disease and pest detection and classification, deriving relevant preprocessing and analysis methods for hyperspectral data. Finally, it conducts a detailed analysis of classic cases using hyperspectral imaging technology for detecting crop diseases and pests and non-destructive classification, while also analyzing and summarizing the future development trends of hyperspectral imaging technology in agricultural production. The non-destructive rapid detection and classification technology of hyperspectral imaging can effectively select qualified crops and classify crops of different qualities, ensuring the quality of agricultural products. In conclusion, hyperspectral imaging technology can effectively serve the agricultural economy, making agricultural production more intelligent and holding significant importance for the development of agriculture in China. Full article
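One of the spectral preprocessing steps surveyed in this review, Savitzky–Golay smoothing, can be illustrated in a few lines; the spectrum, window length, and polynomial order below are arbitrary choices for demonstration, not values from any cited study.

```python
# Savitzky-Golay smoothing of a noisy synthetic reflectance spectrum.
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(2)
wavelengths = np.linspace(400, 1000, 300)                  # nm
spectrum = np.exp(-((wavelengths - 680) / 120) ** 2) + rng.normal(0, 0.02, 300)

smoothed = savgol_filter(spectrum, window_length=11, polyorder=2)
print(float(np.abs(spectrum - smoothed).mean()))           # noise suppressed
```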
(This article belongs to the Special Issue Machine Learning-Driven Advancements in Coatings)
Figures:
Figure 1. Schematic diagram of the indoor acquisition system.
Figure 2. Flowchart of data processing.
Figure 3. The effect of S–G smoothing treatment.
Figure 4. First-order and second-order effects.
Figure 5. Effect of MSC calibration.
Figure 6. Correction diagram of SNV effect.
Figure 7. Flowchart of the main stages of the genetic algorithm.
Figure 8. Random forest example diagram.
Figure 9. Example of a residual block.
38 pages, 2357 KiB  
Review
Experimental Designs and Statistical Analyses for Rootstock Trials
by Richard P. Marini
Agronomy 2024, 14(10), 2312; https://doi.org/10.3390/agronomy14102312 - 8 Oct 2024
Abstract
Modern agricultural research, including fruit tree rootstock evaluations, began in England. In the mid-1800s, field plots were established at the Rothamsted Research Station to evaluate cultivars and fertilizer treatments for annual crops. By the early 1900s, farmers questioned the value of field experimentation because the results were not always valid due to inadequate randomization and replication and poor data summarization. During the first half of the 20th century, Rothamsted statisticians transformed field plot experimentation. Field trials were tremendously improved by incorporating new experimental concepts, such as randomization rather than systematically arranging treatments, the factorial arrangement of treatments to simultaneously test multiple hypotheses, and consideration of experimental error. Following the classification of clonal apple rootstocks at the East Malling Research Station in the 1920s, the first rootstock trials were established to compare rootstocks and evaluate rootstock performance on different soil types and with different scion cultivars. Although most of the statistical methods were developed for annual crops and perennial crops are more variable and difficult to work with, rootstock researchers were early adopters of these concepts because the East Malling staff included both pomologists and statisticians. Many of the new statistical concepts were incorporated into on-farm demonstration plots to promote early farmer adoption of new practices. Recent enhancements in computing power have led to the rapid expansion of statistical theory, the development of new statistical methods, and new statistical programming environments, such as R. Over the past century, in many regions of the world, the adoption of new statistical methods has lagged their development. This review is intended to summarize the adoption of error-controlling experimental designs by rootstock researchers, to describe statistical methods used to summarize the resulting data, and to provide suggestions for designing and analyzing future trials. Full article
(This article belongs to the Special Issue Recent Insights in Physiology of Tree Fruit Production)
Figures:
Figure 1. Probability of tree survival over 8 years as influenced by five cherry rootstocks in Michigan. Data were from the NC-140 cherry rootstock trial with five rootstocks in a randomized complete block design [165]. Data were analyzed with logistic regression, where tree survival was the binomial response and block was a random effect.
Figure 2. Four scenarios for blocking in a randomized complete block design with four apple rootstocks randomized in four blocks represented by different colors.
Figure 3. Minimum detectable difference at the 5% level of significance for TCA, cumulative yield per tree, and cumulative yield efficiency for comparing means of two apple rootstocks in a randomized complete block design when varying numbers of blocks and trees per block are used at Michigan (left column) and Ontario (right column).
Figure 4. GGE biplot showing the cumulative yield for the 'Gala' apple on 18 rootstocks at 25 locations. Rootstock names are in blue font and location names are in red font. The asterisks and plus signs indicate the positions in the figure for the rootstocks and locations.
Figure 5. The which-won-where polygon view of the GGE biplot shows which rootstocks had the highest cumulative yield per tree at which locations. Rootstock and location names are shown for the polygon; location names from the GGE plot, similar to Figure 4, are in red font. The asterisks and plus signs indicate the positions in the figure for the rootstocks and locations.
15 pages, 2101 KiB  
Article
An IoT-Enabled Real-Time Crop Prediction System Using Soil Fertility Analysis
by Manju G, Syam Kishor K S and Binson V A
Eng 2024, 5(4), 2496-2510; https://doi.org/10.3390/eng5040130 - 8 Oct 2024
Abstract
Changes in soil fertility have led to a decline in crop production, making it challenging for farmers to select the best crops based on soil conditions. Accurate crop prediction can significantly enhance crop productivity, and machine learning plays a crucial role in this process. Crop forecasting is influenced by soil, geographic, and environmental characteristics, with feature selection being essential for identifying suitable crops. In this study, we developed a real-time soil fertility analyzer to obtain real-time values of soil parameters such as potassium, phosphorus, nitrogen content, temperature, pH, moisture content, and electrical conductivity. The crops examined were coconut, ginger, plantain, and tapioca. The data collected from 100 soil samples served as the dataset for training and testing different classification algorithms for crop prediction. Among the algorithms tested, the k-nearest neighbors (KNN) algorithm demonstrated the highest performance, with an accuracy of 84%, precision of 85%, recall of 88.8%, and specificity of 92.4%. These results indicate that machine learning, combined with real-time soil analysis, can effectively predict suitable crops, enhancing crop productivity and aiding farmers in making informed decisions. This approach can revolutionize traditional farming practices by providing precise, data-driven insights into crop selection, ultimately improving agricultural efficiency and sustainability. Full article
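The classification stage described above reduces to training a KNN model on the seven measured soil parameters. A hedged sketch with synthetic readings standing in for the 100 field samples:

```python
# k-nearest-neighbours crop prediction from soil parameters (synthetic data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 7))                       # N, P, K, temp, pH, moisture, EC
y = rng.choice(["coconut", "ginger", "plantain", "tapioca"], size=100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
model.fit(X_tr, y_tr)
print(model.score(X_te, y_te))                      # accuracy on held-out samples
```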
(This article belongs to the Section Electrical and Electronic Engineering)
Figures:
Figure 1. JXCT 7-in-1 soil sensor.
Figure 2. Sensor interfacing with Arduino UNO.
Figure 3. (a) Setup for monitoring soil data; and (b) reading soil data from sensors.
Figure 4. Graphical representation of the accuracy of the three models.
Figure 5. Confusion matrix—KNN.
Figure 6. ROC curve—KNN.
14 pages, 3042 KiB  
Article
Research on the Method of Imperfect Wheat Grain Recognition Utilizing Hyperspectral Imaging Technology
by Hongtao Zhang, Li Zheng, Lian Tan, Jiapeng Yang and Jiahui Gao
Sensors 2024, 24(19), 6474; https://doi.org/10.3390/s24196474 - 8 Oct 2024
Abstract
As the primary grain crop in China, wheat holds a significant position in the country’s agricultural production, circulation, consumption, and various other aspects. However, the presence of imperfect grains has greatly impacted wheat quality and, subsequently, food security. In order to detect perfect wheat grains and six types of imperfect grains, a method for the fast and non-destructive identification of imperfect wheat grains using hyperspectral images was proposed. The main contents and results are as follows: (1) We collected wheat grain hyperspectral data. Seven types of wheat grain samples, each containing 300 grains, were prepared to construct a hyperspectral imaging system for imperfect wheat grains, and visible/near-infrared hyperspectral data from 2100 wheat grains were collected. The Savitzky–Golay algorithm was used to analyze the hyperspectral images of wheat grains, selecting 261-dimensional effective hyperspectral datapoints within the range of 420.61–980.43 nm. (2) The Successive Projections Algorithm was used to reduce the dimensions of the 261-dimensional hyperspectral datapoints, selecting 33-dimensional hyperspectral datapoints. Principal Component Analysis was used to extract the optimal spectral wavelengths, specifically selecting hyperspectral images at 647.57 nm, 591.78 nm, and 568.36 nm to establish the dataset. (3) Particle Swarm Optimization was used to optimize the Support Vector Machine model, Convolutional Neural Network model, and MobileNet V2 model, which were established to recognize seven types of wheat grains. The comprehensive recognition rates were 93.71%, 95.14%, and 97.71%, respectively. The results indicate that a larger model with more parameters may not necessarily yield better performance. The research shows that the MobileNet V2 network model exhibits superior recognition efficiency, and the integration of hyperspectral image technology with the classification model can accurately identify imperfect wheat grains. Full article
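One concrete step in this pipeline is picking the band indices closest to the three selected wavelengths (647.57, 591.78, and 568.36 nm) to form a three-channel input image for the classifiers. A minimal sketch with a synthetic hyperspectral cube:

```python
# Select the bands nearest to three target wavelengths to build a pseudo-RGB image.
import numpy as np

wavelengths = np.linspace(420.61, 980.43, 261)            # nm axis of the 261 bands
cube = np.random.default_rng(4).random((100, 100, 261))   # H x W x bands (synthetic)

targets = [647.57, 591.78, 568.36]
idx = [int(np.argmin(np.abs(wavelengths - t))) for t in targets]
three_channel = cube[:, :, idx]                           # classifier input image
print(idx, three_channel.shape)                           # (100, 100, 3)
```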
(This article belongs to the Section Sensing and Imaging)
Figures:
Figure 1. Hyperspectral images of seven types of wheat grains: (a) broken wheat; (b) germinated wheat; (c) moldy wheat; (d) insect-eaten wheat; (e) black embryo wheat; (f) gibberella wheat.
Figure 2. Visible hyperspectral imaging system: (1) closed box; (2) light source; (3) displacement stage controller; (4) computer; (5) visible light camera; (6) imaging spectrometer; (7) lens; (8) glass fiber linear lamp; (9) displacement stage; (10) wheat grain.
Figure 3. Arrangement of wheat grains.
Figure 4. Flow chart of hyperspectral data processing.
Figure 5. Hyperspectral image at 736.59 nm: (a) hyperspectral image; (b) binary image.
Figure 6. Hyperspectral images of seven types of wheat grains: (a) broken wheat; (b) germinated wheat; (c) moldy wheat; (d) insect-eaten wheat; (e) black embryo wheat; (f) gibberella wheat; (g) normal wheat.
Figure 7. Average spectral reflectance curves of seven types of wheat grains.
Figure 8. Derivation results of hyperspectral data: (a) Savitzky–Golay first derivative; (b) Savitzky–Golay second derivative.
Figure 9. Feature selection results of SPA.
Figure 10. Classification results of spectral features optimized by SPA.
Figure 11. CNN and MobileNet V2 model recognition results: (a) training accuracy and training loss; (b) verification accuracy and verification loss.
19 pages, 11653 KiB  
Article
Influence of Vegetation Phenology on the Temporal Effect of Crop Fractional Vegetation Cover Derived from Moderate-Resolution Imaging Spectroradiometer Nadir Bidirectional Reflectance Distribution Function–Adjusted Reflectance
by Yinghao Lin, Tingshun Fan, Dong Wang, Kun Cai, Yang Liu, Yuye Wang, Tao Yu and Nianxu Xu
Agriculture 2024, 14(10), 1759; https://doi.org/10.3390/agriculture14101759 - 5 Oct 2024
Abstract
Moderate-Resolution Imaging Spectroradiometer (MODIS) Nadir Bidirectional Reflectance Distribution Function (BRDF)-Adjusted Reflectance (NBAR) products are being increasingly used for the quantitative remote sensing of vegetation. However, the assumption underlying the MODIS NBAR product’s inversion model—that surface anisotropy remains unchanged over the 16-day retrieval period—may be unreliable, especially since the canopy structure of vegetation undergoes stark changes at the start of season (SOS) and the end of season (EOS). Therefore, to investigate the MODIS NBAR product’s temporal effect on the quantitative remote sensing of crops at different stages of the growing seasons, this study selected typical phenological parameters, namely SOS, EOS, and the intervening stable growth of season (SGOS). The PROBA-V bioGEOphysical product Version 3 (GEOV3) Fractional Vegetation Cover (FVC) served as verification data, and the Pearson correlation coefficient (PCC) was used to compare and analyze the retrieval accuracy of FVC derived from the MODIS NBAR product and MODIS Surface Reflectance product. The Anisotropic Flat Index (AFX) was further employed to explore the influence of vegetation type and mixed pixel distribution characteristics on the BRDF shape under different stages of the growing seasons and different FVC; that was then combined with an NDVI spatial distribution map to assess the feasibility of using the reflectance of other characteristic directions besides NBAR for FVC correction. The results revealed the following: (1) Generally, at the SOSs and EOSs, the differences in PCCs before vs. after the NBAR correction mainly ranged from 0 to 0.1. This implies that the accuracy of FVC derived from MODIS NBAR is lower than that derived from MODIS Surface Reflectance. Conversely, during the SGOSs, the differences in PCCs before vs. after the NBAR correction ranged between –0.2 and 0, suggesting the accuracy of FVC derived from MODIS NBAR surpasses that derived from MODIS Surface Reflectance. (2) As vegetation phenology shifts, the ensuing differences in NDVI patterning and AFX can offer auxiliary information for enhanced vegetation classification and interpretation of mixed pixel distribution characteristics, which, when combined with NDVI at characteristic directional reflectance, could enable the accurate retrieval of FVC. Our results provide data support for the BRDF correction timescale effect of various stages of the growing seasons, highlighting the potential importance of considering how they differentially influence the temporal effect of NBAR corrections prior to monitoring vegetation when using the MODIS NBAR product. Full article
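FVC retrieval from NDVI is typically done with the pixel-dichotomy formula FVC = (NDVI − NDVI_soil) / (NDVI_veg − NDVI_soil), and the accuracy comparison above relies on the Pearson correlation coefficient. The sketch below shows both steps with invented endpoint values and arrays; it is not the paper's processing chain for MOD09GA/MCD43A4.

```python
# Pixel-dichotomy FVC from NDVI, then PCC against a reference FVC (all synthetic).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(5)
ndvi = np.clip(rng.normal(0.5, 0.2, 500), 0.0, 1.0)        # per-pixel NDVI
ndvi_soil, ndvi_veg = 0.05, 0.86                           # assumed bare-soil / full-cover NDVI

fvc = np.clip((ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil), 0.0, 1.0)
fvc_reference = np.clip(fvc + rng.normal(0, 0.05, 500), 0.0, 1.0)  # stand-in for GEOV3 FVC

pcc, _ = pearsonr(fvc, fvc_reference)
print(f"PCC = {pcc:.3f}")
```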
Figures:
Figure 1. Spatial extent of the Wancheng District study area (in Henan Province, China). (a) Map of land cover types showing the location of sampling points across the study area; this map came from MCD12Q1 (v061). (b–d) True-color images of the three mixed pixels, obtained from Sentinel-2; the distribution characteristics are: crops above with buildings below (b); crops below with buildings above (c); and buildings in the upper-left corner, crops in the remainder (d).
Figure 2. Monthly average temperature and monthly total precipitation in the study area, from 2017 to 2021.
Figure 3. Data processing flow chart. The green rectangles from top to bottom represent three steps: crop phenological parameter extraction with TIMESAT; Fractional Vegetation Cover (FVC) derived from MOD09GA and MCD43A4; and accuracy evaluation. Blue solid rectangles refer to a used product or derived results, while blue dashed rectangles refer to the software or model used in this study. NDVI_MOD09GA and NDVI_MCD43A4: NDVI derived from MOD09GA and MCD43A4; FVC_MOD09GA and FVC_MCD43A4: FVC derived from MOD09GA and MCD43A4; PCC_MOD09GA and PCC_MCD43A4: Pearson correlation coefficients (PCCs) calculated between GEOV3 FVC and FVC_MOD09GA or FVC_MCD43A4, respectively.
Figure 4. NDVI and EVI time series fitted curves and phenological parameters of crops. SOS: start of season; EOS: end of season; SGOS: stable growth of season.
Figure 5. Spatial distribution of Fractional Vegetation Cover (FVC) derived from MOD09GA and MCD43A4, and the difference images of FVC: (a–c) FVC derived from MOD09GA, FVC derived from MCD43A4, and their difference on 15 November 2020; (d–f) the same on 10 February 2021; (g–i) the same on 30 September 2021.
Figure 6. Pearson correlation coefficients (PCCs) of FVC derived before and after the NBAR correction with GEOV3 FVC at different stages of the growing seasons: (a) PCC_MOD09GA and PCC_MCD43A4 in 2018–2021; (b) scatterplot of numerical differences between PCC_MOD09GA and PCC_MCD43A4. SOS: start of season; EOS: end of season; SGOS: stable growth of season.
Figure 7. NDVI spatial distribution maps of a crop pixel, savanna pixel, and grassland pixel in different stages of the growing seasons: (a–d) crop; (e–h) savanna; (i–l) grassland. SZA: Solar Zenith Angle; FVC: Fractional Vegetation Cover; AFX_RED: Anisotropic Flat Index (AFX) in the red band; AFX_NIR: AFX in the near-infrared band.
Figure 8. NDVI spatial distribution maps of mixed pixels in different stages of the growing seasons: (a–d) crops above and buildings below; (e–h) crops below and buildings above; (i–l) buildings in the upper-left corner and crops in the remainder. SZA: Solar Zenith Angle; FVC: Fractional Vegetation Cover; AFX_RED: AFX in the red band; AFX_NIR: AFX in the near-infrared band.
29 pages, 7138 KiB  
Article
The Landscape Ecological Quality of Two Different Farm Management Models: Polyculture Agroforestry vs. Conventional
by Gemma Chiaffarelli, Nicolò Sgalippa and Ilda Vagge
Land 2024, 13(10), 1598; https://doi.org/10.3390/land13101598 - 30 Sep 2024
Abstract
Low-intensity, diversified agricultural land use is needed to counteract the current decline in agrobiodiversity. Landscape ecology tools can support agrobiodiversity assessment efforts by investigating biodiversity-related ecological functions (pattern–process paradigm). In this study, we test a toolkit of landscape ecology analyses to compare different farm management models: polyculture agroforestry (POLY) vs. conventional monoculture crop management (CV). Farm-scale analyses are applied on temperate alluvial sites (Po Plain, Northern Italy), as part of a broader multi-scale analytical approach. We analyze the landscape ecological quality through landscape matrix composition, patch shape complexity, diversity, metastability, and connectivity indices. We assess farm differences through multivariate analyses and t-tests and test a farm classification tool, namely, a scoring system based on the relative contributions of POLY farms, considering their deviation from a local CV baseline. The results showed a separate ecological behavior of the two models. The POLY model showed better performance, with significant positive contributions to the forest and semi-natural component equipment and diversity; agricultural component diversity, metastability; total farm diversity, metastability, connectivity, and circuitry. A reference matrix for the ecological interpretation of the results is provided. Farm classification provides a quick synthesis of such contributions, facilitating farm comparisons. The methodology has a low cost and quickly provides information on ongoing ecological processes resulting from specific farm management practices; it is intended to complement field-scale assessments and could help to meet the need for a partially outcome-based assessment of good farm practice. Full article
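As one example of the index family used here, a landscape diversity (Shannon) index can be computed directly from land-use patch areas. The patch mosaic below is invented, and the study's full index set (connectivity, metastability, and so on) is not reproduced.

```python
# Shannon diversity and a simple dominance measure from land-use patch areas.
import numpy as np

patch_areas_ha = {"arable": 12.0, "agroforestry rows": 3.5, "hedgerow": 0.8,
                  "woodland": 2.2, "pond margin": 0.3}

p = np.array(list(patch_areas_ha.values()))
p = p / p.sum()
shannon = -np.sum(p * np.log(p))            # landscape diversity H'
dominance = np.log(len(p)) - shannon        # deviation from maximum evenness
print(f"H' = {shannon:.3f}, dominance = {dominance:.3f}")
```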
Figures:
Figure 1. (A) The four sites' locations in the Western Po Plain (from left to right: C, D, G, P); (B) the main local-scale land use traits of each local-scale landscape system; (C) POLY farm (red line) and CV farm (orange line) boundaries within each local site.
Figure 2. Flow chart synthesizing the applied multi-scale methodology. In this work, we present the results of farm-scale analyses. References for the analyses not reported in the present article: * = [29]; ** = [31].
Figure 3. (A) Spearman's rs correlation coefficient values used for a first screening of the landscape ecology indices applied at farm scale: structural traits, shape complexity, diversity, and metastability indices (TOT, FSN, and AGR subsystems); connectivity and circuitry indices (TOT farm system only). (B) Correlation analysis on the separated TOT, FSN, and AGR subsystems used for subsequent index screening for multivariate analysis (TOT: Spearman's rs correlation coefficients; FSN and AGR: linear Pearson r correlation coefficients; p > 0.05 crossed). See Table A1 for details on the applied indices.
Figure 4. Two-way hierarchical clustering results from the TOT farm system's dataset (left side, non-normally distributed data, clustering based on Gower distance) and the FSN and AGR subsystems' datasets (middle and right side, normally distributed data, clustering based on Euclidean distance). The involved indices are shown on the top-right of each matrix; in green: POLY farms; in orange: CV farms.
Figure 5. Ordination multivariate analysis results: PCoA on the TOT farm systems' dataset (left side, non-normally distributed data); PCA on the FSN and AGR subsystems' datasets (middle and right side, normally distributed data); in green: POLY farms; in orange: CV farms. On the bottom are reported the PCoA coordinates' correlation coefficients with landscape ecology indices for TOT data (Spearman's rs; p < 0.05 crossed) and the landscape ecology indices' loading plots on PC1-2-3 for FSN and AGR data.
Figure 6. Comparison between the landscape ecology indices applied to the total farm system (TOT), the forest and semi-natural farm subsystem (FSN), and the agricultural subsystem (AGR) on POLY and CV farms (mean of the 5 CV farms of each case study). Dark grey boxes highlight significantly differing indices: * = p < 0.05; ** = p < 0.01; *** = p < 0.001; if gray *, only one of the Monte Carlo permutation non-parametric test or the Welch test for unequal variance shows significant differences. See Table A1 for details on the applied indices.
Figure 7. For the TOT farm system and the FSN and AGR subsystems are shown: (A) distribution of the differences between each POLY farm and each corresponding CV farm (5 CV farms for each POLY farm); (B) classification of each of the 4 POLY farms under study into the 4 classes derived from quartile values, based on the difference between the POLY farm and the corresponding CV_MEAN; (C) classification of POLY farms based on the sum of the quartile class values for each POLY farm, for the TOT, FSN, and AGR subsystems, and for their sum (SUM_ALL). See Appendix A for details on the applied indices.
Figure 8. Reference matrix for the ecological interpretation (rows) of lower (↓) or higher (↑) values of the applied landscape ecology indices (columns) which showed relevant differences between the two farm management models (POLY and CV). Interpretations refer to agricultural landscape peculiarities and focus on biodiversity support functions. Grey boxes represent the POLY farms' case histories according to our study results: light grey boxes indicate higher/lower mean values in POLY farms compared to their local CV baseline; dark grey boxes indicate significantly higher/lower values in POLY farms (t-test results). References for each index: NP_% [4,21,25,35,44,45,51,52,56,57,58,59,61,84]; MPS [25,28,47,52,57]; FSN [4,21,25,44,46,51,52,53,56,84]; AGR [4,25,44,51,52,56,84]; DIV1A, DIV1B, DOM1 [25,33,39,43,53,54,84,85,86]; MBTC [48,49,50]; CON, WCON, CIR, WCIR, L/N, WL/N [25,35,39,40,41,42,50,55,62,65,87,88]; EQC_1_2, EQC_4_5 [35,50,62,65,87,88].
Figure A1. Example of the spatial representation of indices for the D (upper side) and G (bottom side) sites, which showed the best performances. For each site, diversity (DIV1A), mean biological territorial capacity (MBTC), and connectivity graph maps showing links and nodes are reported for the POLY farm (D and G) and for one example local CV farm (D-CV4; C-CV4).
25 pages, 5094 KiB  
Article
Evaluating Flood Damage to Paddy Rice Fields Using PlanetScope and Sentinel-1 Data in North-Western Nigeria: Towards Potential Climate Adaptation Strategies
by Sa’ad Ibrahim and Heiko Balzter
Remote Sens. 2024, 16(19), 3657; https://doi.org/10.3390/rs16193657 - 30 Sep 2024
Abstract
Floods are significant global disasters, but their impact in developing countries is greater due to the lower shock tolerance, many subsistence farmers, land fragmentation, poor adaptation strategies, and low technical capacity, which worsen food security and livelihoods. Therefore, accurate and timely monitoring of flooded crop areas is crucial for both disaster impact assessments and adaptation strategies. However, most existing methods for monitoring flooded crops using remote sensing focus solely on estimating the flood damage, neglecting the need for adaptation decisions. To address these issues, we have developed an approach to mapping flooded rice fields using Earth observation and machine learning. This approach integrates high-resolution multispectral satellite images with Sentinel-1 data. We have demonstrated the reliability and applicability of this approach by using a manually labelled dataset related to a devastating flood event in north-western Nigeria. Additionally, we have developed a land suitability model to evaluate potential areas for paddy rice cultivation. Our crop extent and land use/land cover classifications achieved an overall accuracy of between 93% and 95%, while our flood mapping achieved an overall accuracy of 99%. Our findings indicate that the flood event caused damage to almost 60% of the paddy rice fields. Based on the land suitability assessment, our results indicate that more land is suitable for cultivation during natural floods than is currently being used. We propose several recommendations as adaptation measures for stakeholders to improve livelihoods and mitigate flood disasters. This study highlights the importance of integrating multispectral and synthetic aperture radar (SAR) data for flood crop mapping using machine learning. Decision-makers will benefit from the flood crop mapping framework developed in this study in a number of spatial planning applications. Full article
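The core of the mapping approach, stacking an optical water index with SAR backscatter and classifying flooded versus non-flooded pixels with a random forest, can be sketched as follows. The arrays, labels, and thresholds are synthetic stand-ins, not the labelled dataset used in the study; the band maths follows the standard NDWI = (green − NIR) / (green + NIR) definition.

```python
# NDWI + Sentinel-1 VV features feeding a random forest flood/non-flood classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(6)
green = rng.random(1000); nir = rng.random(1000)
ndwi = (green - nir) / (green + nir + 1e-9)
vv_db = rng.normal(-15, 4, 1000)                        # Sentinel-1 VV backscatter (dB)

X = np.column_stack([ndwi, vv_db])
y = ((ndwi > 0.1) & (vv_db < -17)).astype(int)          # toy "flooded" labels

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(rf.score(X, y))
```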
Figures:
Figure 1. Study area location showing the digital elevation and the three sites used in subsequent sections as extracts of the LULC/crop and flood extents from pre- and post-flood imagery (a), and the study area overlaid on Nigeria (b).
Figure 2. (a–f) RGB composites for three sites within the study area, along with their corresponding RF LULC maps from the four PlanetScope bands, SAR, and NDWI: (a) PlanetScope RGB for site I, (b) PlanetScope RGB for site II, (c) PlanetScope RGB for site III, (d) RF LULC map for site I, (e) RF LULC map for site II, and (f) RF LULC map for site III.
Figure 3. Histogram distribution of the flood and non-flood points extracted from the training data used for the flooded/non-flooded classification using (a) NDWI (PlanetScope), (b) Sentinel-1 VV backscatter, and (c) Sentinel-1 VH backscatter.
Figure 4. (a–i) RF flooded and non-flooded layers overlaid by water bodies and affected and unaffected rice fields, from subsets of the full scenes (zoomed in on the three sites shown on the study area map).
Figure 5. Sentinel-1 VV signals of flooded/non-flooded paddy rice fields and their corresponding rainfall anomalies (2016–2023) for the three locations shown on the graphs (a,b).
Figure 6. Area in hectares and percentages for each LULC/crop extent, flooded/non-flooded area, and the damaged PR.
Figure 7. Potential cultivable area estimated with the weighted overlay approach for different scenarios: (a) rainfed (RF) agriculture, (b) rainfed under natural flood (RFNF), and (c) irrigation (IR) paddy rice farming.
17 pages, 3366 KiB  
Article
Beyond the Basics: Taxonomic Classification and Pathogenomics in Recently Discovered Dickeya dadantii Isolates
by Mateus Sudario Pereira, Diego Lucas Neres Rodrigues, Juan Carlos Ariute, Douglas Vinícius Dias Carneiro, Pedro Alexandre Sodrzeieski, Marco Aurélio Siqueira Gama, Elineide Barbosa de Souza, Vasco Azevedo, Bertram Brenig, Ana Maria Benko-Iseppon and Flavia Figueira Aburjaile
Taxonomy 2024, 4(4), 696-712; https://doi.org/10.3390/taxonomy4040036 - 30 Sep 2024
Abstract
The genus Dickeya consists of Gram-negative bacteria capable of causing soft rot symptoms in plants, which involve tissue breakdown, particularly in storage organs such as tubers, rhizomes, and bulbs. These bacteria are ranked among the top ten most relevant phytopathogens and seriously threaten economically valuable crops and ornamental plants. This study employs a genomic analysis approach to taxonomically classify and characterize the resistome and virulome of two new strains, CCRMP144 and CCRMP250, identified as Dickeya dadantii. These strains were found to be the causative agents of soft rot symptoms in chili pepper (Capsicum spp.) and lettuce (Lactuca sativa), respectively, in the northeastern region of Brazil. The methodology employed in silico techniques, including tetra correlation search (TCS) and Average Nucleotide Identity (ANI) analysis, in association with phylogenomic tree inference. TCS and ANI analysis showed that the studied strains belong to the species Dickeya dadantii, and the phylogenomic analysis grouped them in the D. dadantii clade. The genomic characterization identified 68 virulence genes, 54 biocide and heavy-metal resistance genes, and 23 antibiotic resistance genes. As far as we know, this is the first genomic study of Brazilian D. dadantii strains. This study demonstrates the efficacy of this approach for taxonomic classification and provides insights into the pathogenesis, host range, and adaptability of these strains, which are crucial for the development of more effective management and control strategies for soft rot diseases. Full article
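The ANI-based part of the classification boils down to assigning a query genome to the reference species with the highest ANI above the conventional ~95–96% species boundary. A toy illustration follows; the ANI values are invented, and the paper combines this with TCS and phylogenomics rather than relying on a single threshold.

```python
# Assign a query genome to a species by highest ANI above a conventional cut-off.
ANI_SPECIES_THRESHOLD = 95.0   # percent; common boundary, not the paper's exact value

def assign_species(ani_to_refs: dict[str, float]) -> str:
    best_ref, best_ani = max(ani_to_refs.items(), key=lambda kv: kv[1])
    return best_ref if best_ani >= ANI_SPECIES_THRESHOLD else "unclassified Dickeya sp."

query = {"Dickeya dadantii": 97.8, "Dickeya solani": 91.2, "Dickeya dianthicola": 89.5}
print(assign_species(query))   # -> Dickeya dadantii
```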
Figures:
Figure 1. Collapsed phylogenomic tree inference using orthologues. Rooted maximum likelihood phylogeny of the 186 Dickeya strains. Monophyletic clades of D. fangzhongdai, D. dianthicola, D. solani, and D. chrysanthemi were collapsed into single species branches. Each species is denoted by a distinct color. Pectobacterium atrosepticum (Pat_21A) was used as an outgroup. The strains CCRMP144 and CCRMP250 are highlighted in yellow.
Figure 2. Bar chart showing the pan-virulome and its associated mechanisms. The x-axis represents the different mechanisms, while the y-axis represents the number of genes. The bars are color coded to distinguish among core, accessory, and exclusive gene categories.
Figure 3. Clustermap indicating the presence of virulence genes in D. dadantii compared to the Virulence Factor Database (VFDB). The x-axis represents the genes found, while the y-axis represents the D. dadantii strains; the color gradient highlights the identity level.
Figure 4. Clustermap indicating the heavy metal and biocide resistance genes of D. dadantii against the BacMet database. The x-axis represents the resistance genes, while the y-axis represents the D. dadantii strains; the color gradient highlights the identity level.
Figure 5. Bar chart showing heavy metal compounds from a pan-resistome perspective. The x-axis represents the different heavy metals, while the y-axis represents the number of genes found within the genus. The bar colors blue, orange, and green represent core, accessory, and exclusive genes, respectively.
Figure 6. Clustermap illustrating the presence of antibiotic resistance genes in D. dadantii compared to the CARD database. The x-axis represents antibiotic resistance genes, while the y-axis represents the D. dadantii strains; the color gradient highlights the identity level.
Figure 7. Bar chart showing a pan-resistome perspective of the drug classes. The x-axis represents the different drug classes, while the y-axis represents the number of genes found within the genus. The bar colors blue, orange, and green represent core, accessory, and exclusive genes, respectively.
21 pages, 4545 KiB  
Article
SkipResNet: Crop and Weed Recognition Based on the Improved ResNet
by Wenyi Hu, Tian Chen, Chunjie Lan, Shan Liu and Lirong Yin
Land 2024, 13(10), 1585; https://doi.org/10.3390/land13101585 - 29 Sep 2024
Abstract
Weeds have a detrimental effect on crop yield, yet the prevailing chemical weed control methods pollute the ecosystem and the land. Reducing dependence on herbicides, realizing sustainable and intelligent weed control, and protecting the land have therefore become a clear trend, and intelligent weeding requires efficient and accurate crop and weed recognition. Convolutional neural networks (CNNs) are widely applied for weed and crop recognition because of their speed and efficiency. In this paper, a multi-path input skip-residual network (SkipResNet) is proposed to improve the classification of weeds and crops. It modifies the residual block of the ResNet model and combines three different path selection algorithms. Experiments showed that on the plant seedling dataset, the proposed network achieved an accuracy of 95.07%, which is 0.73%, 0.37%, and 4.75% higher than that of ResNet18, VGG19, and MobileNetV2, respectively. Validation on the weed–corn dataset also showed that the algorithm identifies weeds and crops more accurately, thereby reducing land contamination during the weeding process. In addition, the algorithm is generalizable and can be used for image classification in agriculture and other fields. Full article
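To make the modified skip connection concrete, the sketch below implements a residual block in which an earlier input x0 is added to F(x) instead of the block's immediate input x, as the abstract and Figure 3 describe. This is a minimal PyTorch sketch under stated assumptions: the framework, layer widths, and the requirement that x0 and F(x) share the same shape are illustrative choices rather than the authors' exact implementation, and the path-selection policy (which earlier feature map serves as x0) is left to the caller.

```python
import torch
import torch.nn as nn

class SkipResidualBlock(nn.Module):
    """Residual block variant: output = ReLU(x0 + F(x)) instead of ReLU(x + F(x)).
    Here x is the block's immediate input and x0 is an earlier feature map chosen
    by a path-selection rule (assumed to have the same shape as F(x))."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor, x0: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.bn1(self.conv1(x)))  # first layer of F(x)
        out = self.bn2(self.conv2(out))           # second layer of F(x)
        return self.relu(out + x0)                # skip connection from the earlier input x0

# Usage sketch: feed the block both its immediate input and the routed earlier input.
block = SkipResidualBlock(channels=64)
x0 = torch.randn(1, 64, 56, 56)  # earlier feature map selected by the path algorithm
x = torch.randn(1, 64, 56, 56)   # immediate input to the block
y = block(x, x0)                 # shape: (1, 64, 56, 56)
```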
(This article belongs to the Special Issue GeoAI for Land Use Observations, Analysis and Forecasting)
Show Figures

Figure 1: General description of the methodology for weed classification.
Figure 2: Structure of a residual block: x is the input to layer1, F(x) is the output after the data pass through layer1 and layer2, and a skip connection between x and F(x) makes the block's output x + F(x).
Figure 3: Improvement of the residual block: x is the input to layer1 (the output of the layer preceding layer1 in the network), x0 is the original input, and F(x) is the output after layer1 and layer2. After F(x) is derived, x0 is re-input so that the block's output becomes x0 + F(x).
Figure 4: The framework of ResNet, SkipResNet, and SkipNet. (a) The 18-layer ResNet, which is equivalent to the 18-layer SkipResNet when k = 1; (b) the 18-layer SkipResNet with the first input at the middle layer of the path (k = 2); (c) the 18-layer SkipResNet with the second input path at the middle layer (k = 3); and (d) evaluation of the 18-layer SkipNet on the CIFAR-10 dataset, with an input image resolution of 32 × 32. Here, k is the input path label.
Figure 5: Example images of the plant seedling dataset [29]; the labels correspond to those in Table 2. (a) Black-grass, (b) charlock, (c) cleavers, (d) common chickweed, (e) common wheat, (f) fat hen, (g) loose silky-bent, (h) maize, (i) scentless mayweed, (j) shepherd's purse, (k) small-flowered cranesbill, and (l) sugar beet.
Figure 6: Weed–corn dataset [14]: (a) bluegrass, (b) chenopodium album, (c) cirsium setosum, (d) sedge, and (e) corn.
Figure 7: CIFAR-10 dataset [30].
Figure 8: Confusion matrices of (a) SkipResNet18, (b) ResNet18, (c) VGG19, and (d) MobileNetV2 on a test set of 12 plant seedlings: (1) black-grass, (2) charlock, (3) cleavers, (4) common chickweed, (5) common wheat, (6) fat hen, (7) loose silky-bent, (8) maize, (9) scentless mayweed, (10) shepherd's purse, (11) small-flowered cranesbill, and (12) sugar beet.
Figure 9: Precision–recall plots for (a) SkipResNet18, (b) ResNet18, (c) VGG19, and (d) MobileNetV2.
Figure 10: Confusion matrices of (a) SkipResNet18 and (b) ResNet18 on the test sets of four weeds and corn: (1) bluegrass, (2) chenopodium album, (3) cirsium setosum, (4) corn, and (5) sedge.
Figure 11: Accuracy of SkipNet18, ResNet18, VGG19, and ResNet34 models on the CIFAR-10 dataset.
23 pages, 10727 KiB  
Article
Enabling Intelligence on the Edge: Leveraging Edge Impulse to Deploy Multiple Deep Learning Models on Edge Devices for Tomato Leaf Disease Detection
by Dennis Agyemanh Nana Gookyi, Fortunatus Aabangbio Wulnye, Michael Wilson, Paul Danquah, Samuel Akwasi Danso and Awudu Amadu Gariba
AgriEngineering 2024, 6(4), 3563-3585; https://doi.org/10.3390/agriengineering6040203 - 29 Sep 2024
Abstract
Tomato diseases, including Leaf blight, Leaf curl, Septoria leaf spot, and Verticillium wilt, are responsible for up to 50% of annual yield loss, significantly impacting global tomato production, valued at approximately USD 87 billion. In Ghana, tomato production shows a yield gap of about 50%, which calls for drastic measures to increase yields. Conventional diagnostic methods are labor-intensive and impractical for real-time application, highlighting the need for innovative solutions. This study addresses these issues in Ghana by utilizing Edge Impulse to deploy multiple deep-learning models on a single mobile device, facilitating the rapid and precise detection of tomato leaf diseases in the field. This work compiled and rigorously prepared a comprehensive Ghanaian dataset of tomato leaf images, applying advanced preprocessing and augmentation techniques to enhance robustness. Using TensorFlow, we designed and optimized efficient convolutional neural network (CNN) architectures, including MobileNet, Inception, ShuffleNet, SqueezeNet, EfficientNet, and a custom Deep Neural Network (DNN). The models were converted to TensorFlow Lite format and quantized to int8, substantially reducing model size and improving inference speed. Deployment files were generated, and the Edge Impulse platform was configured to enable multiple model deployments on a mobile device. Performance evaluations across edge hardware provided metrics such as inference speed, accuracy, and resource utilization, demonstrating reliable real-time detection. EfficientNet achieved a high training accuracy of 97.12% with a compact 4.60 MB model size, proving its efficacy for mobile device deployment. In contrast, the custom DNN model is optimized for microcontroller unit (MCU) deployment. Integrating this edge artificial intelligence (AI) technology into agricultural practices offers scalable, cost-effective, and accessible solutions for disease classification, enhancing crop management and supporting sustainable farming practices. Full article
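The TensorFlow Lite conversion and int8 quantization step described above can be sketched as follows. This is a minimal illustration, not the authors' code: the stand-in Keras model, the random calibration images, the input size (224 × 224), the number of classes, and the output filename are all placeholder assumptions; only the converter API calls reflect standard TensorFlow Lite usage for full-integer post-training quantization.

```python
import numpy as np
import tensorflow as tf

# Placeholder stand-ins for the trained classifier and its calibration data.
num_classes = 6  # assumed number of tomato leaf classes (illustrative)
model = tf.keras.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
calib_images = np.random.rand(64, 224, 224, 3).astype(np.float32)

def representative_data_gen():
    # Yield calibration samples so the converter can estimate activation ranges.
    for img in calib_images:
        yield [np.expand_dims(img, axis=0)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8   # full-integer I/O for mobile/MCU targets
converter.inference_output_type = tf.int8
tflite_model = converter.convert()

with open("tomato_leaf_int8.tflite", "wb") as f:
    f.write(tflite_model)
print(f"Quantized model size: {len(tflite_model) / 1024:.1f} KiB")
```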
(This article belongs to the Special Issue Implementation of Artificial Intelligence in Agriculture)
Show Figures

Figure 1: TensorFlow and Edge Impulse integration model.
Figure 2: Stages of the proposed method.
Figure 3: Samples of the dataset images.
Figure 4: Distribution of tomato leaf disease classes.
Figure 5: MobileNet architecture.
Figure 6: Inception architecture.
Figure 7: ShuffleNet architecture.
Figure 8: SqueezeNet architecture.
Figure 9: EfficientNet architecture.
Figure 10: Custom DNN architecture.
Figure 11: Edge Impulse platform flowchart.
Figure 12: MobileNet training accuracy and training loss.
Figure 13: Inception training accuracy and training loss.
Figure 14: ShuffleNet training accuracy and training loss.
Figure 15: SqueezeNet training accuracy and training loss.
Figure 16: EfficientNet training accuracy and training loss.
Figure 17: Custom DNN training accuracy and training loss.
Figure 18: Model deployment to a mobile phone using a QR code.
Figure 19: Tomato leaf disease detection inference on a mobile phone.
Figure 20: Use case flowchart.
20 pages, 2515 KiB  
Article
Detection of Thymoma Disease Using mRMR Feature Selection and Transformer Models
by Mehmet Agar, Siyami Aydin, Muharrem Cakmak, Mustafa Koc and Mesut Togacar
Diagnostics 2024, 14(19), 2169; https://doi.org/10.3390/diagnostics14192169 - 29 Sep 2024
Abstract
Background: Thymoma is a tumor that originates in the thymus gland, a part of the human body located behind the breastbone. It is a malignant disease that is rare in children but more common in adults and usually does not spread outside the thymus. The exact cause of thymic disease is not known, but it is thought to be more common in people infected with the Epstein–Barr virus (EBV) at an early age. Various surgical methods are used in clinical settings to treat thymoma, and expert opinion is very important in the diagnosis of the disease. Recently, next-generation technologies have become increasingly important in disease detection, and today's early detection systems already use transformer models that are open to technological advances. Methods: What makes this study different is the use of transformer models instead of traditional deep learning models. The data used in this study were obtained from patients undergoing treatment at Fırat University, Department of Thoracic Surgery. The dataset consisted of two classes: thymoma disease images and non-thymoma disease images. The proposed approach consists of preprocessing, model training, feature extraction, feature set fusion between models, efficient feature selection, and classification. In the preprocessing step, unnecessary regions of the images were cropped and the region of interest (ROI) technique was applied. Four types of transformer models (Deit3, Maxvit, Swin, and ViT) were used for model training. After training, the feature sets obtained from the best three models were merged between models (Deit3 and Swin, Deit3 and ViT, Swin and ViT, and Deit3 and Swin and ViT). The combined feature set of the pairing (Deit3 and ViT) that gave the best performance with fewer features was then analyzed using the mRMR feature selection method, and the SVM method was used for classification. Results: With the mRMR feature selection method, 100% overall accuracy was achieved with feature sets containing fewer features. Cross-validation was used to verify the overall accuracy of the proposed approach, yielding 99.22% overall accuracy. Conclusions: These findings emphasize the added value of the proposed approach in the detection of thymoma. Full article
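As an illustration of the fusion → mRMR → SVM pipeline described above, the sketch below concatenates feature vectors from two backbones, runs a greedy mRMR-style selection (mutual information for relevance, mean absolute Pearson correlation with already-selected features for redundancy, which is one common approximation and not necessarily the exact variant used in the paper), and evaluates an SVM with cross-validation. The feature matrices, labels, dimensions, and the number of selected features are random placeholders, not data from the study.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
feats_deit3 = rng.normal(size=(120, 128))  # placeholder Deit3 feature vectors
feats_vit = rng.normal(size=(120, 128))    # placeholder ViT feature vectors
y = rng.integers(0, 2, size=120)           # placeholder labels: thymoma vs. non-thymoma

# Feature-set fusion between models: simple concatenation along the feature axis.
X = np.hstack([feats_deit3, feats_vit])

def mrmr_select(X, y, k=20):
    """Greedy mRMR-style selection: maximize relevance (mutual information with y)
    minus redundancy (mean |Pearson correlation| with features already selected)."""
    relevance = mutual_info_classif(X, y, random_state=0)
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        remaining = [i for i in range(X.shape[1]) if i not in selected]
        redundancy = np.array([
            np.mean([abs(np.corrcoef(X[:, i], X[:, j])[0, 1]) for j in selected])
            for i in remaining
        ])
        scores = relevance[remaining] - redundancy
        selected.append(remaining[int(np.argmax(scores))])
    return np.array(selected)

top_idx = mrmr_select(X, y, k=20)           # keep the top 20 fused features
clf = SVC(kernel="rbf")                     # SVM classifier on the reduced feature set
acc = cross_val_score(clf, X[:, top_idx], y, cv=10).mean()
print(f"10-fold CV accuracy on placeholder data: {acc:.3f}")
```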
(This article belongs to the Special Issue Advanced Computer-Aided Diagnosis Using Medical Images)
Show Figures

Graphical abstract
Figure 1: Sample images from the dataset categories: (a) patients without thymoma, and (b) patients with thymoma.
Figure 2: Application of the data cropping technique to the original data: example subset of images.
Figure 3: Application of the ROI technique after data cropping: (a) CT images without thymoma disease, and (b) CT images with thymoma disease.
Figure 4: Binary-type classification by SVM [12].
Figure 5: Process stage of the transformer models.
Figure 6: General design of the proposed hybrid model.
Figure 7: Classification performances of ViT models: (a) Deit3/base patch16, (b) MaxViT/base-tf, (c) Swin/base patch4, and (d) ViT/base patch16.
Figure 8: Confusion matrices obtained in the classification process of the transformer models: (a) Deit3/base patch16, (b) MaxViT/base-tf, (c) Swin/base patch4, and (d) ViT/base patch16.
Figure 9: Confusion matrices obtained by combining the feature sets between the models (training/test rate: 0.8/0.2; classifier: SVM): (a) DeiT3 and Swin (W), (b) DeiT3 and ViT (V), (c) Swin and ViT (Y), and (d) DeiT3 and Swin and ViT (Z).
Figure 10: Confusion matrices obtained by combining the feature sets between the models (cross validation: k = 10; classifier: SVM): (a) DeiT3 and Swin (W), (b) DeiT3 and ViT (V), (c) Swin and ViT (Y), and (d) DeiT3 and Swin and ViT (Z).
Figure 11: Confusion matrices obtained by SVM classification of the best features selected by the mRMR feature selection method (train rate: 0.8, test rate: 0.2): (a) top 50, (b) top 100, (c) top 200, (d) top 300, (e) top 400, (f) top 500, (g) top 750, and (h) top 1000 features.
Figure 12: Confusion matrices obtained by SVM classification of the best features selected by the mRMR feature selection method (cross validation: k = 10): (a) top 50, (b) top 100, (c) top 200, (d) top 300, (e) top 400, (f) top 500, (g) top 750, and (h) top 1000 features.