Search Results (4,159)

Search Parameters:
Keywords = multispectral

18 pages, 4745 KiB  
Article
The Link between Surface Visible Light Spectral Features and Water–Salt Transfer in Saline Soils—Investigation Based on Soil Column Laboratory Experiments
by Shaofeng Qin, Yong Zhang, Jianli Ding, Jinjie Wang, Lijing Han, Shuang Zhao and Chuanmei Zhu
Remote Sens. 2024, 16(18), 3421; https://doi.org/10.3390/rs16183421 - 14 Sep 2024
Abstract
Monitoring soil salinity with remote sensing is difficult, but understanding the link between saline soil surface spectra and soil water and salt transport processes can support modeling for soil salinity monitoring. In this study, we used an indoor soil column experiment, an unmanned aerial vehicle multispectral camera, and soil moisture sensors to study the water and salt transport process in the soil column under different water addition conditions and to investigate the relationship between this process and the spectral reflectance of images of the soil surface. The soil column observations show that water and salt transport follows the basic law of "salt moves together with water, and when water evaporates, salt is retained in the soil". Salt accumulation increases the image spectral reflectance of the surface layer of the soil column, while soil temperature has no effect on the reflectance. As the water percolates down, water and salt accumulate at the bottom of the soil column. The salinity index decreases instantly after the addition of brine and then tends to increase slowly. The experimental results indicate that this work captures the relationship between the water and salt transport process and remote sensing spectra, which can provide a theoretical basis and reference for soil water and salinity monitoring.
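The study tracks a salinity index (called S5 in the figures) derived from the multispectral bands, but the abstract does not give its formula. As a minimal sketch, the snippet below assumes a commonly cited blue–red–green formulation, S5 = (B × R)/G; the paper's actual definition may differ, and the reflectance values are toy data.

```python
import numpy as np

def salinity_index_s5(blue, green, red):
    """Per-pixel salinity index. Assumes the commonly cited form
    S5 = (B * R) / G; the paper's exact formulation is not stated
    in the abstract."""
    green = np.where(green == 0, np.nan, green)  # guard against division by zero
    return blue * red / green

# Toy 2x2 reflectance arrays standing in for UAV multispectral bands
blue  = np.array([[0.10, 0.12], [0.11, 0.13]])
green = np.array([[0.15, 0.16], [0.14, 0.17]])
red   = np.array([[0.20, 0.22], [0.19, 0.23]])
print(salinity_index_s5(blue, green, red))
```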
Figures

Figure 1. Schematic diagram of the indoor soil column experimental setup. (a) Installation of the soil column, light source, camera, and soil sensors. (b) True-color images of the soil surface changing over time as soil salts crystallize and precipitate.
Figure 2. Characteristic soil moisture profiles of soil samples from the soil columns, measured by centrifugation.
Figure 3. Comparison of initial and final surface reflectance of the three soil columns in the three groups of experiments: (a–c) first group, (d–f) second group, (g–i) third group.
Figure 4. Soil moisture content of different soil layers during the three soil column experiments: (a–c) first group, (d–f) second group, (g–i) third group.
Figure 5. Soil conductivity of different soil layers during the three soil column experiments: (a–c) first group, (d–f) second group, (g–i) third group.
Figure 6. Soil temperature of different soil layers during the three soil column experiments: (a–c) first group, (d–f) second group, (g–i) third group.
Figure 7. Salinity index S5 over time for each soil column: (a–c) first, second, and third groups of experiments, respectively.
Figure 8. Change in salinity index S5 for soil column A in the first experiment.
Figure 9. Cumulative mass loss of each soil column in the three groups of experiments (black: first group; red: second group; blue: third group).
25 pages, 16876 KiB  
Article
Optimization of 3D Printing Parameters of High Viscosity PEEK/30GF Composites
by Dmitry Yu. Stepanov, Yuri V. Dontsov, Sergey V. Panin, Dmitry G. Buslovich, Vladislav O. Alexenko, Svetlana A. Bochkareva, Andrey V. Batranin and Pavel V. Kosmachev
Polymers 2024, 16(18), 2601; https://doi.org/10.3390/polym16182601 - 14 Sep 2024
Abstract
The aim of this study was to optimize a set of technological parameters (travel speed, extruder temperature, and extrusion rate) for 3D printing with a PEEK-based composite reinforced with 30 wt.% glass fibers (GFs). For this purpose, both the Taguchi method and finite element methods (FEM) were utilized, and artificial neural networks (ANNs) were implemented for computer simulation of full-scale experiments. Computed tomography of the additively manufactured (AM) samples showed that the optimal 3D printing parameters were an extruder temperature of 460 °C, a travel speed of 20 mm/min, and an extrusion rate of 4 rpm (the microextruder screw rotation speed). These values correlated well with those obtained by computer simulation using the ANNs. In such cases, homogeneous micro- and macro-structures were formed, with minimal sample distortions and porosity levels within 10 vol.% in both structures. The most likely cause of porosity was the expansion of the molten polymer as it was squeezed out of the microextruder nozzle. It was concluded that the mechanical properties of such samples can be improved both by changing the 3D printing strategy to ensure the preferential orientation of GFs along the building direction and by reducing porosity via post-printing treatment or ultrasonic compaction.
(This article belongs to the Special Issue Additive Manufacturing of Fibre Reinforced Polymer Composites)
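The Taguchi step ranks parameter levels by their signal-to-noise (S/N) ratios (plotted in Figure 1). A minimal sketch of the standard larger-is-better S/N statistic, applied to hypothetical replicate tensile strengths, is shown below; the run values and temperature levels are illustrative, not the paper's data.

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi S/N ratio for a larger-is-better response
    (e.g., tensile strength): S/N = -10 * log10(mean(1 / y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Hypothetical replicate tensile strengths (MPa) at three extruder temperatures
runs = {440: [92.0, 95.5, 90.8], 460: [104.2, 101.9, 106.3], 480: [98.7, 97.1, 99.4]}
ratios = {temp: sn_larger_is_better(y) for temp, y in runs.items()}
best = max(ratios, key=ratios.get)  # level with the highest S/N wins
print(ratios, "-> best level:", best)
```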
Figures

Figure 1. S/N ratios for different levels of the technological parameters: (a) tensile strength; (b) elastic modulus; (c) elongation at break.
Figure 2. SEM micrographs of the PEEK/30GF composites additively manufactured using the modes presented in Table 2.
Figure 3. The 3D printing modes of the laboratory experiments in the space of the (input) parameters.
Figure 4. Dependences of the mechanical properties of the PEEK/30GF composite samples on the 3D printing parameters (a), and both dependences and histograms (b) after verification.
Figure 5. The parameter space and the result of checking the 3D printing modes for compliance with the minimum acceptable property values.
Figure 6. The 3D printing modes and a priori knowledge, as well as the SOP area, drawn using the RBFNN model: (a) spread = 0.3, goal = 0.001, training sample of 66 vectors; (b) spread = 0.3, goal = 0.01, training sample of 66 experimental + 54 a priori vectors.
Figure 7. The experimental modes and a priori knowledge, as well as the SOP area, drawn using the FFNN model: (a) 4 hidden-layer neurons, sample of 66 experimental + 132 synthesized vectors; (b) 6 hidden-layer neurons, sample of 66 experimental + 54 prior + 240 synthesized vectors.
Figure 8. Results of model verification within the prior-knowledge planes as a function of the number of experimental and prior property vectors.
Figure 9. Schematic locations of pores with diameters of 20 µm (a), 100 µm (b), and from 20 to 100 µm (c) in the computational domains at a porosity of 30%.
Figure 10. Elastic modulus versus porosity for pores with different diameters d.
Figure 11. Stress distribution surfaces over the representative volume in the presence of pores with diameters of 20 µm (a), 100 µm (b), and from 20 to 100 µm (c) at a porosity of 30%.
Figure 12. Three-dimensional micro-CT views of the samples from the supporting-table (a–c) and 3D-printing-head (d–f) sides; mode 12 (a,d); mode 14 (b,e); mode 10 (c,f).
Figure 13. Orthogonal projections of the samples near the fracture surfaces at image (slice) sizes of 7.5–8.0 mm (a–c), 7.5–4.5 mm (d–f), and 8.0–4.5 mm (g–i). Red: Z-axis section; blue: X-axis section; green: Y-axis section.
Figure 14. Comparative cross-sectional areas of the samples depending on the position of the height section (along the Z axis): mode 12 (sample No. 19); mode 14 (sample No. 28); mode 10 (sample No. 30).
Figure 15. Visualizations of a full tomogram (a) and a cut-out VOI (b).
Figure 16. Visualizations of the areas (sections) selected for each mode to calculate the porosity levels for images with sizes of 4.5–7.5 mm (a–c).
Figure 17. Orthogonal projections of an individual PEEK granule. Red: Z-axis section; blue: X-axis section; green: Y-axis section.
20 pages, 7101 KiB  
Article
Tracking Evapotranspiration Patterns on the Yinchuan Plain with Multispectral Remote Sensing
by Junzhen Meng, Xiaoquan Yang, Zhiping Li, Guizhang Zhao, Peipei He, Yabing Xuan and Yunfei Wang
Sustainability 2024, 16(18), 8025; https://doi.org/10.3390/su16188025 - 13 Sep 2024
Abstract
Evapotranspiration (ET) is a critical component of the hydrological cycle, and it has a decisive impact on the ecosystem balance in arid and semi-arid regions. The Yinchuan Plain, located in the Gobi of Northwest China, has strong surface ET, which significantly affects the regional water resource cycle. However, high-resolution evapotranspiration datasets are currently lacking, and long time series remote sensing ET estimation requires substantial computation time. To assess the ET pattern in this region, we obtained the actual ET (ETa) of the Yinchuan Plain between 1987 and 2020 using the Google Earth Engine (GEE) platform. Specifically, we used Landsat TM+/OLI remote sensing imagery and the GEE implementation of the Surface Energy Balance Algorithm for Land (geeSEBAL) to analyze the spatial distribution pattern of ET over different seasons. We then reproduced the interannual variation in ET from 1987 to 2020 and statistically analyzed the distribution patterns and contributions of ET for different land use types. The results show that (1) the daily ETa of the Yinchuan Plain is highest in the central lake wetland area in spring, with a maximum of 4.32 mm day−1; in summer, it is concentrated around the croplands and water bodies, with a maximum of 6.90 mm day−1; and in autumn and winter, it is mainly concentrated around the water bodies and impervious areas, with maxima of 3.93 and 1.56 mm day−1, respectively. (2) From 1987 to 2020, ET showed clear upward or downward trends in areas with significant land use changes, but regional ET as a whole remained relatively stable. (3) The ETa values for different land use types rank as follows: water body > cultivated land > impervious > grassland > bare land. Our results show that geeSEBAL is highly applicable in the Yinchuan Plain area. It allows accurate and detailed inversion of ET and has great potential for evaluating long-term ET in data-scarce areas owing to its low sensitivity to meteorological inputs, which facilitates the study of the regional hydrological cycle and water governance.
(This article belongs to the Section Sustainable Water Management)
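geeSEBAL solves the full SEBAL surface energy balance on Google Earth Engine; reproducing it here would be lengthy, so the sketch below illustrates only the hot/cold anchor-pixel idea behind such models, scaling a reference ET by an evaporative fraction. This is a deliberate simplification under stated assumptions, not the geeSEBAL API, and the land-surface temperatures and reference ET are toy values.

```python
import numpy as np

def eta_evaporative_fraction(ts, et_ref):
    """Simplified SEBAL-style scaling: pick hot/cold anchor pixels from the
    land-surface temperature (ts) field, compute an evaporative fraction,
    and scale the reference ET. geeSEBAL solves the full energy balance;
    this shows only the anchoring idea in miniature."""
    t_cold, t_hot = np.nanpercentile(ts, 1), np.nanpercentile(ts, 99)
    ef = np.clip((t_hot - ts) / (t_hot - t_cold), 0.0, 1.0)  # 1 = wet, 0 = dry
    return ef * et_ref

ts = np.array([[298.0, 305.0], [312.0, 320.0]])   # toy LST field (K)
print(eta_evaporative_fraction(ts, et_ref=6.9))   # daily ETa estimate (mm/day)
```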
Figures

Figure 1. Locations of the meteorological stations and the Landsat images used to illustrate land cover conditions in the Yinchuan Plain area.
Figure 2. Comparison between meteorological station large-pan ET and geeSEBAL ET. Diamond-shaped points represent outliers lying outside 150% of the interquartile range.
Figure 3. Comparison between ETp and geeSEBAL ET (a–d). Compared with large-scale evapotranspiration, R² is significantly improved, indicating that the model correlation is influenced by external factors.
Figure 4. Comparison between small-pan ET and geeSEBAL ET ((a–f) show different meteorological stations in the Yinchuan Plain).
Figure 5. Seasonal ETa changes in the Yinchuan Plain: (a) spring, (b) summer, (c) autumn, and (d) winter ET distributions.
Figure 6. Trends in ETa on the Yinchuan Plain from 1987 to 2020.
Figure 7. Area of Yinchuan Plain land use types.
Figure 8. ETa in different subsurface types.
Figure 9. Comparison of remote sensing imagery (a), land use classification (b), and ET imagery (c) on the Yinchuan Plain.
Figure 10. Impervious areas misclassified in some intersecting land types; water bodies are identified as impervious areas (red).
Figure 11. ETa contribution of different subsurface types.
Figure 12. geeSEBAL with the batch image estimation mode.
15 pages, 10244 KiB  
Article
Identification of Floating Green Tide in High-Turbidity Water from Sentinel-2 MSI Images Employing NDVI and CIE Hue Angle Thresholds
by Lin Wang, Qinghui Meng, Xiang Wang, Yanlong Chen, Xinxin Wang, Jie Han and Bingqiang Wang
J. Mar. Sci. Eng. 2024, 12(9), 1640; https://doi.org/10.3390/jmse12091640 - 13 Sep 2024
Abstract
Remote sensing technology is widely used to obtain information on floating green tides, and thresholding methods based on indices such as the normalized difference vegetation index (NDVI) and the floating algae index (FAI) play an important role in such studies. However, because these methods are influenced by many factors, the threshold values vary greatly; in particular, extraction error clearly increases in high-turbidity water (HTW) (NDVI > 0). In this study, high-spatial-resolution multispectral images from the Sentinel-2 MSI mission were used as the data source. The International Commission on Illumination (CIE) hue angle, calculated from remotely sensed equivalent multispectral reflectance data with the RGB method, was found to be extremely effective in distinguishing floating green tides from areas of HTW. Statistical analysis of Sentinel-2 MSI images showed that a hue angle threshold of 218.94° effectively eliminates the effect of HTW. A test demonstration of the method on a Sentinel-2 MSI image was carried out using the identified thresholds of NDVI > 0 and CIE hue angle < 218.94°. The demonstration showed that the method effectively eliminates misidentification caused by HTW pixels (NDVI > 0), yielding better consistency between the identified floating green tide and its distribution in the true-color image. The method enables rapid and accurate extraction of information on floating green tides in HTW and offers a new solution for monitoring and tracking green tides in coastal areas.
(This article belongs to the Section Marine Environmental Science)
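A minimal sketch of the two-threshold test is shown below: NDVI from the red and NIR bands, and the CIE hue angle from the RGB bands via the standard linear-RGB-to-XYZ transform and chromaticity coordinates. The paper's exact RGB-method angle convention is not fully specified in the abstract, so the reference axis and direction here follow the common CIE convention (counterclockwise around the white point), and the pixel values are toy data; the numeric threshold may need adapting to the paper's convention.

```python
import numpy as np

# Standard linear sRGB -> CIE XYZ (D65) matrix
M = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])

def hue_angle_deg(r, g, b):
    """CIE hue angle in degrees, measured counterclockwise around the
    white point (x, y) = (1/3, 1/3) in the chromaticity diagram."""
    rgb = np.stack([np.ravel(r), np.ravel(g), np.ravel(b)])
    X, Y, Z = M @ rgb
    s = X + Y + Z
    x, y = X / s, Y / s
    ang = np.degrees(np.arctan2(y - 1/3, x - 1/3)) % 360
    return ang.reshape(np.shape(r))

def green_tide_mask(nir, red, r, g, b):
    ndvi = (nir - red) / (nir + red)
    return (ndvi > 0) & (hue_angle_deg(r, g, b) < 218.94)

# Toy pixels: a green-tide-like pixel and a bluish water pixel, both NDVI > 0
nir = np.array([0.30, 0.12]); red = np.array([0.10, 0.10])
r = np.array([0.10, 0.06]); g = np.array([0.25, 0.09]); b = np.array([0.05, 0.16])
print(hue_angle_deg(r, g, b))               # approx. [91, 222] degrees
print(green_tide_mask(nir, red, r, g, b))   # [ True False]
```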
Figures

Figure 1. Spatial distribution of the in situ optical observation stations and spatial coverage of the satellite data used in this study.
Figure 2. In situ measured hyperspectral reflectance of typical water bodies and floating green tides of different coverages.
Figure 3. Scatterplots of the NDVI value versus the hue angle calculated using (a) the in situ measured hyperspectral reflectance, and the corresponding Sentinel-2 MSI multispectral reflectance in (b) the five visible bands and (c) the three RGB bands.
Figure 4. Sentinel-2 MSI images (23 May 2023) and the corresponding pixel identification results (red) based on NDVI > 0 for floating green tides in high-turbidity water.
Figure 5. Distribution of pixel counts at different hue angles when NDVI > 0.
Figure 6. (a,d,g) Sentinel-2 MSI true-color images obtained on 7 June 2022, 23 May 2023, and 1 June 2024; (b,e,h) identification results from the traditional NDVI thresholding method (NDVI > 0); (c,f,i) identification results from the method proposed in this study (NDVI > 0 and hue angle < 218.94°).
Figure 7. Distribution of pixel counts within the variation interval of the sensitivity factors: (a) hue angle, (b–h) reflectance in B2–B8, (i) B4/B3 reflectance ratio.
24 pages, 5237 KiB  
Article
Effect of the Bioprotective Properties of Lactic Acid Bacteria Strains on Quality and Safety of Feta Cheese Stored under Different Conditions
by Angeliki Doukaki, Olga S. Papadopoulou, Antonia Baraki, Marina Siapka, Ioannis Ntalakas, Ioannis Tzoumkas, Konstantinos Papadimitriou, Chrysoula Tassou, Panagiotis Skandamis, George-John Nychas and Nikos Chorianopoulos
Microorganisms 2024, 12(9), 1870; https://doi.org/10.3390/microorganisms12091870 - 10 Sep 2024
Abstract
Lately, the inclusion of additional lactic acid bacteria (LAB) strains in cheeses has become more popular, since they can affect a cheese's nutritional, technological, and sensory properties, as well as increase the product's safety. This work studied the effect of Lactiplantibacillus pentosus L33 and Lactiplantibacillus plantarum L125 free cells and supernatants on feta cheese quality and the fate of Listeria monocytogenes. In addition, rapid and non-invasive techniques such as Fourier transform infrared (FTIR) spectroscopy and multispectral imaging (MSI) were used to classify the cheese samples based on their sensory attributes. Slices of feta cheese were contaminated with 3 log CFU/g of L. monocytogenes, and the slices were then sprayed with (i) free cells of the two LAB strains in co-culture (F, ~5 log CFU/g), (ii) supernatant of the LAB co-culture (S), or (iii) UHT milk as a control (C), or wrapped with Na-alginate edible films containing the pellet (cells, FF) or the supernatant (SF) of the LAB strains. Subsequently, samples were stored in air, in brine, or under vacuum at 4 and 10 °C. During storage, microbiological counts, pH, and water activity (aw) were monitored, and sensory assessment was conducted. At every sampling point, spectral data were also acquired by FTIR and MSI. Results showed that the initial microbial population of feta was ca. 7.6 log CFU/g, consisting of LAB (>7 log CFU/g) and yeasts and molds at lower levels, while no Enterobacteriaceae were detected. During aerobic, brine, and vacuum storage at both temperatures, growth of the pathogen was slightly delayed in S and F samples, which reached lower levels than the C samples. The yeast and mold population was slightly delayed in brine and vacuum packaging. For aerobic storage at 4 °C, the shelf life of F samples was extended by 4 days compared with C and S samples. At 10 °C, the shelf life of both F and S samples was extended by 13 days compared with C samples. FTIR and MSI analyses provided reliable estimations of feta quality using the PLS-DA method, with total accuracy (%) ranging from 65.26 to 84.31 and from 60.43 to 89.12, respectively. In conclusion, the application of bioprotective LAB strains can extend feta's shelf life and provide a mild antimicrobial action against L. monocytogenes and spoilage microbiota. Furthermore, the findings of this study validate the effectiveness of FTIR and MSI techniques, in tandem with data analytics, for the rapid assessment of feta quality.
(This article belongs to the Section Food Microbiology)
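PLS-DA, used here to classify the FTIR and MSI spectra, is commonly implemented as PLS regression against one-hot class targets with an argmax decision. A minimal sketch on synthetic FTIR-like spectra is given below; the data, class structure, and component count are illustrative, not the study's.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Toy stand-in for FTIR spectra (samples x wavenumbers), two quality classes
X = rng.normal(size=(120, 300))
y = np.repeat([0, 1], 60)                      # 0 = fresh, 1 = spoiled
X[y == 1, 40:60] += 1.0                        # class-dependent band shift

Y = np.eye(2)[y]                               # one-hot targets for PLS-DA
X_tr, X_te, Y_tr, Y_te, y_tr, y_te = train_test_split(X, Y, y, random_state=0)

pls = PLSRegression(n_components=5).fit(X_tr, Y_tr)
y_hat = pls.predict(X_te).argmax(axis=1)       # class = largest predicted score
print("total accuracy (%):", 100 * (y_hat == y_te).mean())
```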
Figures

Figure 1. Populations of the examined microorganisms and pH values during aerobic storage of non-inoculated feta cheese samples (mean ± standard deviation): (a–c) C, F, and S samples at 4 °C; (d–f) C, F, and S samples at 10 °C. Total viable counts, cocci/streptococci, lactic acid bacteria, and yeasts and molds are shown as solid lines; pH values (secondary axis) as a dotted line. No statistically significant differences were observed (p > 0.05).
Figure 2. Total viable counts (TVC, solid line) and Listeria monocytogenes (dashed line) during aerobic storage of inoculated feta cheese samples (mean ± standard deviation): (a–c) C, F, and S samples at 4 °C; (d–f) C, F, and S samples at 10 °C. No statistically significant differences were observed (p > 0.05).
Figure 3. Populations of the examined microorganisms and pH values during brine storage of non-inoculated feta cheese samples; panels and symbols as in Figure 1. No statistically significant differences were observed (p > 0.05), except for TVC between C and both F and S samples and for cocci/streptococci at 10 °C between C and S samples.
Figure 4. TVC and Listeria monocytogenes during brine storage of inoculated feta cheese samples; panels and symbols as in Figure 2. Statistically significant differences (p < 0.05) were observed for TVC between C and both F and S samples.
Figure 5. Populations of the examined microorganisms and pH values during vacuum storage of non-inoculated feta cheese samples; panels and symbols as in Figure 1. No statistically significant differences were observed (p > 0.05), except for cocci/streptococci at 10 °C.
Figure 6. TVC and Listeria monocytogenes during vacuum storage of inoculated feta cheese samples; panels and symbols as in Figure 2. Statistically significant differences (p < 0.05) were observed for L. monocytogenes between C and S samples.
Figure 7. Populations of the examined microorganisms and pH values during vacuum storage of non-inoculated feta cheese samples with edible film: (a–c) C, FF, and SF samples at 4 °C; (d–f) C, FF, and SF samples at 10 °C; symbols as in Figure 1. No statistically significant differences were observed (p > 0.05), except for cocci/streptococci at 4 °C between C and FF samples.
Figure 8. TVC and Listeria monocytogenes during vacuum storage of inoculated feta cheese samples with edible film; panels as in Figure 7 and symbols as in Figure 2. No statistically significant differences were observed (p > 0.05).
Figure 9. Sensory scores during storage at 4 and 10 °C under (a) aerobic, (b) brine, (c) vacuum, and (d) vacuum-with-edible-film conditions for appearance (Ap), texture (Te), aroma (Ar), taste (Ts), and total score (T). Dashed lines represent the end of shelf life.
Figure 10. Raw Fourier transform infrared (FTIR) spectra in the 1800–900 cm−1 wavenumber range for feta cheese samples stored under (a) aerobic, (b) brine, (c) vacuum, and (d) vacuum-with-edible-film conditions. Fresh samples (day 0): black solid line; spoiled samples at 4 °C: green dashed line; spoiled samples at 10 °C: red dashed line.
Figure 11. Indicative multispectral imaging (MSI) reflectance spectra (mean ± standard deviation) from the benchtop MSI instrument for feta cheese samples stored under (a) aerobic, (b) brine, (c) vacuum, and (d) vacuum-with-edible-film conditions; line styles as in Figure 10.
Figure 12. Indicative MSI reflectance spectra (mean ± standard deviation) from the portable MSI instrument; conditions and line styles as in Figure 11.
18 pages, 5655 KiB  
Article
Use of Phenomics in the Selection of UAV-Based Vegetation Indices and Prediction of Agronomic Traits in Soybean Subjected to Flooding
by Charleston dos Santos Lima, Darci Francisco Uhry Junior, Ivan Ricardo Carvalho and Christian Bredemeier
AgriEngineering 2024, 6(3), 3261-3278; https://doi.org/10.3390/agriengineering6030186 - 10 Sep 2024
Abstract
Flooding is a frequent environmental stress that reduces soybean growth and grain yield in many producing areas of the world, such as the United States, Southeast Asia, and Southern Brazil. In these regions, soybean is frequently cultivated in lowland areas in crop rotation with rice, which provides numerous technical, economic, and environmental benefits. In this context, identifying the most important spectral variables for selecting flooding-tolerant soybean genotypes is a primary demand within plant phenomics, with faster and more reliable results enabled by multispectral sensors mounted on unmanned aerial vehicles (UAVs). Accordingly, this research aimed to identify the optimal UAV-based multispectral vegetation indices for characterizing the response of soybean genotypes subjected to flooding and to test the best-fitting linear model for predicting tolerance scores, relative maturity group, biomass, and grain yield based on phenomics analysis. Forty-eight soybean cultivars were sown in two environments (flooded and non-flooded). Ground evaluations and UAV image acquisition were conducted at 13, 38, and 69 days after flooding and at grain harvest, corresponding to the phenological stages V8, R1, R3, and R8, respectively. Data were subjected to variance component analysis, genetic parameters were estimated, and stepwise regression was applied for each agronomic variable of interest. Our results showed that vegetation indices differ in their suitability for selecting more tolerant genotypes. Using this approach, phenomics analysis efficiently identified indices with high heritability, accuracy, and genetic variation (>80%), as observed for MSAVI, NDVI, OSAVI, SAVI, VEG, MGRVI, EVI2, NDRE, GRVI, BNDVI, and the RGB index. Additionally, variables predicted from estimated genetic data via phenomics had coefficients of determination above 0.90, enabling a reduction in the number of variables in the linear model.
(This article belongs to the Section Remote Sensing in Agriculture)
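Several of the screened indices have standard published formulas that can be computed directly from plot-mean band reflectances; a minimal sketch is below. The reflectance values are illustrative, and choices such as SAVI's soil factor L = 0.5 are the conventional defaults, which may differ from the study's settings.

```python
import numpy as np

def uav_indices(g, r, re, nir):
    """A few of the UAV vegetation indices screened in the study,
    using their standard published formulations."""
    return {
        "NDVI":  (nir - r) / (nir + r),
        "NDRE":  (nir - re) / (nir + re),
        "GRVI":  (g - r) / (g + r),
        "MGRVI": (g**2 - r**2) / (g**2 + r**2),
        "SAVI":  1.5 * (nir - r) / (nir + r + 0.5),          # L = 0.5
        "OSAVI": 1.16 * (nir - r) / (nir + r + 0.16),
        "MSAVI": (2*nir + 1 - np.sqrt((2*nir + 1)**2 - 8*(nir - r))) / 2,
    }

# Toy plot-mean reflectances for a flooded and a non-flooded plot
flooded     = uav_indices(g=0.07, r=0.06, re=0.15, nir=0.25)
non_flooded = uav_indices(g=0.09, r=0.05, re=0.25, nir=0.45)
for k in flooded:
    print(f"{k}: flooded={flooded[k]:.3f}  non-flooded={non_flooded[k]:.3f}")
```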
Figures

Figure 1. Location and experimental design used to evaluate 48 soybean genotypes subjected to two environments (flooded or non-flooded) during the 2022/23 growing season.
Figure 2. Visual rating scale for the flooding tolerance scores (FTS), from 1 (no symptoms of injury) to 9 (death of all plants in the plot).
Figure 3. Platforms and sensors used to collect spectral data.
Figure 4. Workflow diagram of the procedures used in the present study.
Figure 5. BLUP analysis with predicted genetic data for the MSAVI index.
Figure 6. BLUP analysis with predicted genetic data for the MGRVI index.
Figure 7. BLUP analysis with predicted genetic data for the RGB index.
Figure 8. BLUP analysis with predicted genetic data for the EXG index.
Figure 9. BLUP analysis with estimated genetic data for plant height (cm).
Figure 10. BLUP analysis with estimated genetic data for shoot biomass (kg ha−1).
Figure 11. BLUP analysis with predicted genetic data for grain yield (kg ha−1).
Figure 12. Relationship between observed and predicted genetic data (REML/BLUP analysis) using linear models for flooding tolerance score, FTS (a), shoot biomass (b), and grain yield (c).
17 pages, 8025 KiB  
Article
Using Multispectral Data from UAS in Machine Learning to Detect Infestation by Xylotrechus chinensis (Chevrolat) (Coleoptera: Cerambycidae) in Mulberries
by Christina Panopoulou, Athanasios Antonopoulos, Evaggelia Arapostathi, Myrto Stamouli, Anastasios Katsileros and Antonios Tsagkarakis
Agronomy 2024, 14(9), 2061; https://doi.org/10.3390/agronomy14092061 - 9 Sep 2024
Abstract
The tiger longicorn beetle, Xylotrechus chinensis Chevrolat (Coleoptera: Cerambycidae), has posed a significant threat to mulberry trees in Greece since its invasion in 2017, which may be associated with global warming. Detection typically relies on observing adult emergence holes on the bark or dried branches, which indicate severe damage. Addressing pest threats linked to global warming requires efficient, targeted solutions. Remote sensing provides valuable, timely information on vegetation health, and combining these data with machine learning techniques enables early detection of pest infestations. This study utilized airborne multispectral data to detect infestations of X. chinensis in mulberry trees. Variables such as mean NDVI, mean NDRE, mean EVI, and tree crown area were calculated and used in machine learning models, alongside data on adult emergence holes and temperature. Trees were classified into two categories, infested and healthy, based on X. chinensis infestation. The evaluated models included Random Forest, Decision Tree, Gradient Boosting, Multi-Layer Perceptron, K-Nearest Neighbors, and Naïve Bayes. Random Forest proved to be the most effective predictive model, achieving the highest accuracy (0.86), precision (0.84), recall (0.81), and F-score (0.82), with Gradient Boosting performing slightly lower. This study highlights the potential of combining remote sensing and machine learning for early pest detection, promoting timely interventions and reducing environmental impacts.
(This article belongs to the Special Issue Pests, Pesticides, Pollinators and Sustainable Farming)
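A minimal sketch of the winning pipeline, a Random Forest classifier over the listed per-tree variables scored with the four reported metrics, is shown below; the features and labels are synthetic stand-ins, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 200
# Toy per-tree features mirroring the study's variables:
# mean NDVI, mean NDRE, mean EVI, crown area (m^2), emergence holes, temperature (C)
X = np.column_stack([
    rng.uniform(0.3, 0.9, n), rng.uniform(0.2, 0.7, n), rng.uniform(0.2, 0.8, n),
    rng.uniform(5, 40, n),    rng.integers(0, 15, n),   rng.uniform(22, 35, n),
])
# Synthetic rule: infestation more likely with low NDVI and many emergence holes
y = ((X[:, 0] < 0.55) & (X[:, 4] > 4)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
y_hat = clf.predict(X_te)
print("accuracy:",  accuracy_score(y_te, y_hat))
print("precision:", precision_score(y_te, y_hat))
print("recall:",    recall_score(y_te, y_hat))
print("F1:",        f1_score(y_te, y_hat))
```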
Figures

Figure 1. Distribution of X. chinensis in Europe. The species is present in Spain, Italy, and Greece (yellow dot), while in France it is transient (purple dot) (EPPO Global Database, 2023, www.gd.eppo.int, accessed on 30 June 2024).
Figure 2. Adult of X. chinensis on the trunk of a mulberry tree at the Agricultural University of Athens.
Figure 3. Symptoms of pest damage to mulberry trees in the orchard of the Agricultural University of Athens, Greece: (A) adult emergence holes of X. chinensis on the bark of a mulberry tree; (B) bark discoloration caused by the activity of the larvae; (C) dried sprouts on a mulberry tree.
Figure 4. Quadcopter “Mera” (UcanDrone S.A., Koropi, Attica, Greece) with the attached multispectral camera, MicaSense RedEdge MX (AgEagle Aerial Systems Inc., Wichita, KS, USA).
Figure 5. Orthomosaic map of the mulberry orchard (flight of 28 June 2023).
Figure 6. Classified output of the object-based classification for the airborne data of the 28 June 2023 flight.
Figure 7. Correlation matrix of the variables. The strongest statistically significant linear correlation is between the two vegetation indices, NDVI and NDRE (r = 0.74).
Figure 8. Evaluation of the six algorithms based on accuracy, precision, recall, and F1 score.
Figure 9. Importance of variables per learning algorithm based on the training data.
Figure 10. Confusion matrices for the machine learning algorithms.
Figure 11. ROC curves of the six models.
21 pages, 10577 KiB  
Article
Evaluation of Sugarcane Crop Growth Monitoring Using Vegetation Indices Derived from RGB-Based UAV Images and Machine Learning Models
by P. P. Ruwanpathirana, Kazuhito Sakai, G. Y. Jayasinghe, Tamotsu Nakandakari, Kozue Yuge, W. M. C. J. Wijekoon, A. C. P. Priyankara, M. D. S. Samaraweera and P. L. A. Madushanka
Agronomy 2024, 14(9), 2059; https://doi.org/10.3390/agronomy14092059 - 9 Sep 2024
Abstract
Crop monitoring with unmanned aerial vehicles (UAVs) has the potential to reduce field monitoring costs while increasing monitoring frequency and improving efficiency. However, the use of RGB-based UAV imagery for crop-specific monitoring, especially for sugarcane, remains limited. This work proposes a UAV platform with an RGB camera as a low-cost solution for monitoring sugarcane fields, complementing the commonly used multispectral methods. This new approach optimizes RGB vegetation indices for accurate prediction of sugarcane growth, offering many improvements for scalable crop-management methods. The images were captured by a DJI Mavic Pro drone. Four RGB vegetation indices (VIs) (GLI, VARI, GRVI, and MGRVI) and the crop surface model plant height (CSM_PH) were derived from the images. Fractional vegetation cover (FVC) values were compared by image classification. Sugarcane plant height predictions were generated using two machine learning (ML) algorithms—multiple linear regression (MLR) and random forest (RF)—which were compared across five predictor combinations (CSM_PH and the four VIs). At the early stage, all VIs showed significantly lower values than at later stages (p < 0.05), indicating an initially slow progression of crop growth. MGRVI achieved a classification accuracy of over 94% across all growth phases, outperforming traditional indices. Based on the feature rankings, VARI was the least sensitive parameter, showing the lowest correlation (r < 0.5) and mutual information (MI < 0.4). The results showed that the RF and MLR models provided good predictions of plant height. The best estimation results were observed with the combination of CSM_PH and GLI using the RF model (R2 = 0.90, RMSE = 0.37 m, MAE = 0.27 m, and AIC = 21.93). This study revealed that VIs and CSM_PH derived from RGB images captured by UAVs can be useful for monitoring sugarcane growth and boosting crop productivity.
(This article belongs to the Section Precision and Digital Agriculture)
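Two of the building blocks are easy to sketch: CSM plant height as the per-pixel difference between the digital surface and terrain models, and a small MLR fit of field-measured height on CSM_PH and GLI via least squares. All values below are toy data, not the study's measurements.

```python
import numpy as np

# Crop surface model: per-pixel canopy height is DSM minus DTM
dsm = np.array([[102.4, 103.1], [102.9, 103.6]])    # toy surface elevations (m)
dtm = np.array([[100.1, 100.2], [100.2, 100.3]])    # toy terrain elevations (m)
csm_ph = dsm - dtm                                  # CSM_PH per pixel

# Toy plot-level MLR: height ~ b0 + b1*CSM_PH + b2*GLI, fit by least squares
csm = np.array([1.1, 1.8, 2.6, 3.0, 3.4])           # plot-mean CSM_PH (m)
gli = np.array([0.08, 0.15, 0.22, 0.25, 0.27])      # plot-mean GLI
h   = np.array([1.2, 1.9, 2.8, 3.1, 3.6])           # field-measured height (m)
A = np.column_stack([np.ones_like(csm), csm, gli])
coef, *_ = np.linalg.lstsq(A, h, rcond=None)
pred = A @ coef
rmse = np.sqrt(np.mean((pred - h) ** 2))
print("CSM_PH:\n", csm_ph, "\ncoef:", coef, "RMSE (m):", round(rmse, 3))
```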
Figures

Figure 1. Location of the study area: (a) Google location map of Sri Lanka; (b) map of Galoya Plantations, Hingurana, Sri Lanka.
Figure 2. (a) DJI Mavic Pro drone with its RGB camera and controller (source: https://www.dji.com, accessed on 2 August 2021); (b) artificial markings for GCP measurement using a GNSS receiver.
Figure 3. Workflow for flight planning and image acquisition.
Figure 4. Conceptual framework of the study (AOI, area of interest; DSM, digital surface model; DTM, digital terrain model; CSM, crop surface model; PH, plant height; VI, vegetation index; r, Pearson correlation coefficient; MI, mutual information; MLR, multiple linear regression; RF, random forest).
Figure 5. RGB mosaic images at different growth stages (F1, tillering; F2, early grand growth; F3, later grand growth; F4, ripening).
Figure 6. Site appearance of a barren area in the field.
Figure 7. Vegetation pattern distribution based on the vegetation indices at the later grand growth stage: (a) GLI, (b) VARI, (c) GRVI, (d) MGRVI. Red areas are row spaces, inter-plot regions, and barren areas; green areas are vegetation.
Figure 8. Crop growth development based on vegetation cover (F1, tillering; F2, early grand growth; F3, later grand growth; F4, ripening).
Figure 9. Plant height prediction results for the multiple linear regression (MLR) and random forest (RF) algorithms: (A) coefficient of determination, (B) root mean square error (RMSE), and (C) mean absolute error (MAE), for four variable combinations: (a) CSM_PH; (b) CSM_PH + GLI; (c) CSM_PH + GLI + GRVI; (d) CSM_PH + GLI + GRVI + MGRVI.
Figure 10. Akaike's information criterion (AIC) for plant height prediction with the RF models: (1) CSM_PH; (2) CSM_PH + GLI; (3) CSM_PH + GLI + GRVI; (4) CSM_PH + GLI + GRVI + MGRVI.
Figure 11. Scatter plots of RF-predicted versus field-observed plant height (two-best-variable combination, CSM_PH + GLI; Model 2) and of CSM-derived versus field-measured plant height. The dashed line is the 1:1 line.
15 pages, 4447 KiB  
Article
Spectral Reflectance Estimation from Camera Response Using Local Optimal Dataset and Neural Networks
by Shoji Tominaga and Hideaki Sakai
J. Imaging 2024, 10(9), 222; https://doi.org/10.3390/jimaging10090222 - 9 Sep 2024
Abstract
In this study, a novel method is proposed to estimate surface-spectral reflectance from camera responses by combining model-based and training-based approaches. The imaging system is modeled using the spectral sensitivity functions of an RGB camera, the spectral power distributions of multiple light sources, the unknown surface-spectral reflectance, additive noise, and a gain parameter. The estimation procedure comprises two main stages: (1) selecting a local optimal reflectance dataset from a reflectance database and (2) determining the best estimate by applying a neural network to the local optimal dataset only. In stage (1), camera responses are predicted for each reflectance in the database, and the optimal candidates are selected in order of lowest prediction error. In stage (2), most of the reflectance training data are obtained by convex linear combinations of local optimal data using weighting coefficients based on random numbers. A feed-forward neural network with one hidden layer is used to map the observation space onto the spectral reflectance space. In addition, the reflectance estimation is repeated by generating multiple sets of random numbers, and the median of the set of estimated reflectances is taken as the final estimate. Experimental results show that the estimation accuracy exceeds that of other methods.
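A minimal end-to-end sketch of the two-stage procedure is shown below, with a synthetic reflectance database and a random system matrix standing in for the calibrated camera/illuminant model; the database size, neighborhood size, network width, and repeat count are illustrative choices, not the paper's settings.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_bands, n_channels, k = 31, 21, 40   # 31-band reflectance; 7 lights x 3 RGB channels
database = rng.uniform(0.02, 0.95, (500, n_bands))   # stand-in reflectance database
H = rng.uniform(0, 1, (n_channels, n_bands))         # toy system matrix (lights x sensitivities)

target = rng.uniform(0.05, 0.9, n_bands)             # unknown surface reflectance
obs = H @ target + rng.normal(0, 1e-3, n_channels)   # noisy camera responses

# Stage 1: local optimal dataset = k database reflectances whose predicted
# camera responses are closest to the observation
err = np.linalg.norm(database @ H.T - obs, axis=1)
local = database[np.argsort(err)[:k]]

# Stage 2: synthesize training data as random convex combinations of the
# local set, map responses -> reflectance with a one-hidden-layer network,
# and take the median over repeated runs as the final estimate
estimates = []
for seed in range(5):
    w = rng.dirichlet(np.ones(k), size=2000)         # convex weights (rows sum to 1)
    R_train = w @ local
    X_train = R_train @ H.T
    net = MLPRegressor(hidden_layer_sizes=(50,), max_iter=400,
                       random_state=seed).fit(X_train, R_train)
    estimates.append(net.predict(obs[None, :])[0])
rhat = np.median(estimates, axis=0)
print("RMSE:", np.sqrt(np.mean((rhat - target) ** 2)))
```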
Figures

Figure 1. Conceptual diagram of the image acquisition system [14].
Figure 2. Database of surface-spectral reflectances.
Figure 3. Architecture of the feedforward neural network with an m-N-n structure.
Figure 4. Overall flow of the proposed method for estimating spectral reflectance in three steps.
Figure 5. Relative RGB spectral sensitivity functions of the Apple iPhone 6s [14].
Figure 6. Spectral power distributions of the seven LED light sources used in the experiments [14].
Figure 7. Estimated spectral reflectances for the 24 color checkers when applying the proposed method to iPhone 6s observations. Bold and broken curves indicate the estimated and directly measured spectral reflectances, respectively.
Figure 8. Comparison of the average RMSEs between the proposed method and other methods. Wiener, LMMSE, L_Wiener, L_LMMSE, Lp, and Qp denote (1) original Wiener [13], (2) original LMMSE [13], (3) local Wiener [14], (4) local LMMSE [14], (5) linear programming [14], and (6) quadratic programming [14], respectively.
Figure 9. Estimated spectral reflectances for the 24 color checkers when applying network method (2) to iPhone 6s observations without the local optimal dataset. Bold and broken curves indicate the estimated and directly measured spectral reflectances, respectively.
Figure 10. Comparison of the average RMSEs between the proposed method and other methods when using the iPhone 8.
Figure 11. Comparison of the average RMSEs between the proposed method and other methods when using the Huawei P10 lite.
20 pages, 7687 KiB  
Article
Enhancing Surface Water Monitoring through Multi-Satellite Data-Fusion of Landsat-8/9, Sentinel-2, and Sentinel-1 SAR
by Alexis Declaro and Shinjiro Kanae
Remote Sens. 2024, 16(17), 3329; https://doi.org/10.3390/rs16173329 - 8 Sep 2024
Abstract
Long revisit intervals and cloud susceptibility have restricted the applicability of earth observation satellites in surface water studies. Integrating multiple satellites offers the potential for more frequent observations, yet combining different satellite sources, particularly optical and SAR satellites, presents complexities. This research explores the data-fusion potential and limitations of the Landsat-8/9 Operational Land Imager (OLI), Sentinel-2 Multispectral Instrument (MSI), and Sentinel-1 Synthetic Aperture Radar (SAR) satellites for enhancing surface water monitoring. Focusing on segmented surface water images, we demonstrate that combining optical and SAR data is generally effective and straightforward using a simple statistical thresholding algorithm. Kappa coefficients (κ) ranging from 0.80 to 0.95 indicate very strong harmony for integration across reservoir, lake, and river environments. In vegetative environments, integration with S1SAR shows weak harmony, with κ values ranging from 0.27 to 0.45, indicating the need for further study. Global revisit interval maps reveal a significant improvement in median revisit intervals, from 15.87–22.81 days using L8/9 alone to 4.51–7.77 days after incorporating S2, and further to 3.48–4.62 days after adding S1SAR. Even during wet season months, multi-satellite fusion kept median revisit intervals below a week. Maximizing all available open-source earth observation satellites is integral to advancing studies that require more frequent surface water observations, such as flood, inundation, and hydrological modeling.
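The harmony between satellite pairs is scored with Cohen's kappa on the binary water masks; a minimal sketch of that computation on toy masks is given below.

```python
import numpy as np

def kappa(mask_a, mask_b):
    """Cohen's kappa between two binary surface-water masks (True = water),
    the agreement measure used when pairing optical and SAR segmentations."""
    a, b = mask_a.ravel(), mask_b.ravel()
    po = np.mean(a == b)                        # observed agreement
    pe = (np.mean(a) * np.mean(b)               # agreement expected by chance
          + np.mean(~a) * np.mean(~b))
    return (po - pe) / (1 - pe)

rng = np.random.default_rng(3)
water = rng.random((200, 200)) < 0.3            # toy "true" water extent
optical = water.copy()
sar = water ^ (rng.random(water.shape) < 0.05)  # flip 5% of pixels as disagreement
print("kappa(optical, SAR):", round(kappa(optical, sar), 3))
```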
Figures

Figure 1. Percentage decrease in satellite images after cloud cover and cloud shadow masking in 2023 for (a) Landsat-8/9 and (b) Sentinel-2. Each pixel is at 1500 m resolution.
Figure 2. (a) Locations of the study areas; (b) segmented sample image of the Congo River, one of the study areas, within the 0.30° × 0.30° bounding box.
Figure 3. Optical (Landsat-8/9, Sentinel-2) and SAR (Sentinel-1) image-preprocessing workflow.
Figure 4. (a) Sample density distribution of VH backscatter showing distinct water and non-water classes; (b) sample paired heatmaps of MNDWI and VH backscatter.
Figure 5. (a–c) Segmented surface water images of the Congo River from L9, S2, and S1SAR (blue = water, white = non-water, black = contamination); (d) example kappa coefficient values for different satellite pairs on the Congo River; (e) overall kappa coefficient values across all study areas.
Figure 6. Kappa coefficient boxplots for different satellite pairs, by surface water environment. Enclosed in dashed lines are the boxplots against S1SAR in paddy field areas.
Figure 7. Satellite images of (a) California, USA, and (b) the Indus River, Pakistan. From left to right: Landsat false-color composites (bands B6, B5, B4), followed by surface water segmented images from L9, S2, and S1SAR (blue: water, white: non-water).
Figure 8. Global revisit interval maps: (a) Landsat-8/9 only; (b) after optical data fusion (L8/9 + S2); (c) after SAR data fusion (L8/9 + S2 + S1SAR). Each pixel is at 1500 m resolution.
Figure 9. Distribution progression of revisit interval values in the critical regions over the whole of 2023: (a) L8/9; (b) L8/9 + S2; (c) L8/9 + S2 + S1SAR.
Figure 10. Wet season map using ERA5-Land monthly precipitation (blue: DJF; green: MAM; yellow: JJA; red: SON).
Figure 11. Distribution progression of revisit interval values in the critical regions during their respective wet season months: (a) L8/9; (b) L8/9 + S2; (c) L8/9 + S2 + S1SAR.
Figure 12. Heatmaps of percentage change in (a) median revisit interval and (b) interquartile range (IQR) during wet season months relative to the whole-year analysis. Negative values (blue) indicate improvement; positive values (red) indicate worsening.
Figure 13. Comparison of the spatial and temporal resolution of this study with existing global surface water datasets.
21 pages, 20841 KiB  
Article
Snow Detection in Gaofen-1 Multi-Spectral Images Based on Swin-Transformer and U-Shaped Dual-Branch Encoder Structure Network with Geographic Information
by Yue Wu, Chunxiang Shi, Runping Shen, Xiang Gu, Ruian Tie, Lingling Ge and Shuai Sun
Remote Sens. 2024, 16(17), 3327; https://doi.org/10.3390/rs16173327 - 8 Sep 2024
Viewed by 456
Abstract
Snow detection is imperative in remote sensing for various applications, including climate change monitoring, water resources management, and disaster warning. Recognizing the limitations of current deep learning algorithms in cloud and snow boundary segmentation, as well as issues such as the loss of fine snow detail and the omission of snow in mountainous areas, this paper presents a novel snow detection network based on a Swin-Transformer and U-shaped dual-branch encoder structure with geographic information (SD-GeoSTUNet), aiming to address these issues. SD-GeoSTUNet incorporates a CNN branch and a Swin-Transformer branch to extract features in parallel, and a Feature Aggregation Module (FAM) is designed to aggregate detail features across the two branches. Simultaneously, an Edge-enhanced Convolution (EeConv) is introduced to promote snow boundary contour extraction in the CNN branch. In particular, auxiliary geographic information, including altitude, longitude, latitude, slope, and aspect, is encoded in the Swin-Transformer branch to enhance snow detection in mountainous regions. Experiments conducted on Levir_CS, a large-scale cloud and snow dataset originating from Gaofen-1, demonstrate that SD-GeoSTUNet achieves the best performance, with values of 78.08%, 85.07%, and 92.89% for IoU_s, F1_s, and MPA, respectively, yielding superior cloud and snow boundary segmentation and thin cloud and snow detection. Further, ablation experiments reveal that integrating slope and aspect information effectively alleviates the omission of snow in mountainous areas and gives the best visual results under complex terrain. The proposed model can be applied to remote sensing data with geographic information to achieve more accurate snow extraction, supporting hydrological and agricultural research across regions with different geospatial characteristics. Full article
(This article belongs to the Section Environmental Remote Sensing)
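The Edge-enhanced Convolution (EeConv) above combines a vanilla convolution with central, angular, horizontal, and vertical difference convolutions. The PyTorch sketch below illustrates only the central-difference variant of that idea; the class name, the theta mixing weight, and all sizes are assumptions for the example, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CentralDifferenceConv2d(nn.Module):
    """Vanilla convolution mixed with a central-difference term.

    out = conv(x) - theta * (spatial sum of each kernel) * x_center, which
    responds to local intensity differences and so emphasizes edges/contours.
    """
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3, theta: float = 0.7):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                              padding=kernel_size // 2, bias=False)
        self.theta = theta

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out_vanilla = self.conv(x)
        # Collapsing each kernel to its spatial sum gives a 1x1 kernel;
        # convolving with it reproduces the response to a constant patch,
        # so subtracting it leaves only the difference (edge) response.
        kernel_sum = self.conv.weight.sum(dim=(2, 3), keepdim=True)
        out_center = F.conv2d(x, kernel_sum)
        return out_vanilla - self.theta * out_center

x = torch.randn(1, 4, 64, 64)          # e.g., a 4-band multispectral patch
layer = CentralDifferenceConv2d(4, 16)
print(layer(x).shape)                   # torch.Size([1, 16, 64, 64])
```

At theta = 0 the layer reduces to a vanilla convolution; larger theta values weight the difference (edge) response more heavily.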
Figures:
Figure 1. Overview of the proposed SD-GeoSTUNet (hereinafter, the CNN branch and Swin-T branch). (a) Overview of the Decoder Layer. (b) Overview of the Feature Aggregation Module (FAM). (c) Overview of the Residual Layer.
Figure 2. (a) Overview of Edge-enhanced Convolution (EeConv), including a vanilla convolution, a central difference convolution (CDC), an angular difference convolution (ADC), a horizontal difference convolution (HDC), and a vertical difference convolution (VDC). (b) The principle of vertical difference convolution (VDC).
Figure 3. The global distribution of the images in the experiment. Base map from Cartopy.
Figure 4. Segmentation results of cloud and snow under different module combinations. (a) RGB true color image, (b) label, (c) concatenation, (d) FAM, (e) FAM + EeConv. In (b–e), black, blue, and white pixels represent background, cloud, and snow, respectively. The first-row scene is centered at 109.6°E, 48.8°N (17 October 2017); the second-row scene at 86.6°E, 28.5°N (30 November 2016).
Figure 5. Detection results of snow in mountain regions under the geographic-information ablation experiments, with enlarged views marked by blue, green, and red boxes. (a) RGB true color image. (b) Label. (c–e) Detection results of Experiments 1–3. In (b–e), black and white pixels represent background and snow, respectively. The scene is centered at 128.9°E, 44.7°N (27 January 2018).
Figure 6. Detection results where cloud and snow coexist (background excluded), with enlarged views (red boxes). (a) RGB true color image. (b) Label. (c) PSPNet. (d) Segformer. (e) U-Net. (f) CDNetV2. (g) GeoInfoNet. (h) SD-GeoSTUNet. In (b–h), black, blue, and white pixels represent background, cloud, and snow, respectively. The first-row scene is centered at 104.2°E, 31.3°N and the third-row scene at 104.6°E, 33.0°N (both 29 March 2018).
Figure 7. Detection results of pure snow, with enlarged views (red boxes). Panels and colors as in Figure 6. The first-row scene is centered at 128.6°E, 51.3°N (4 January 2016); the third-row scene at 133.7°E, 50.9°N (4 February 2018).
Figure 8. Detection results of pure cloud, with enlarged views (red boxes). Panels and colors as in Figure 6. The scene is centered at 4.1°W, 54.2°N (13 July 2016).
43 pages, 24204 KiB  
Article
Support Vector Machine Algorithm for Mapping Land Cover Dynamics in Senegal, West Africa, Using Earth Observation Data
by Polina Lemenkova
Earth 2024, 5(3), 420-462; https://doi.org/10.3390/earth5030024 - 6 Sep 2024
Viewed by 494
Abstract
This paper addresses the problem of mapping land cover types in Senegal and recognizing vegetation systems in the Saloum River Delta on satellite images. Multi-seasonal landscape dynamics were analyzed using Landsat 8-9 OLI/TIRS images from 2015 to 2023. Two image classification methods were compared and evaluated in the GRASS GIS software (version 8.4.0; GRASS Development Team; originally Champaign, Illinois, USA, now a multinational project): unsupervised classification using the k-means clustering algorithm and supervised classification using the Support Vector Machine (SVM) algorithm. The land cover types were identified using machine learning (ML)-based analysis of the spectral reflectance of the multispectral images. The results indicated a decrease in savannas, an increase in croplands and agricultural lands, a decline in forests, and changes to coastal wetlands, including mangroves with high biodiversity. The practical aim is to describe a novel method of creating land cover maps from RS data for each class and to improve accuracy, which we accomplish by calculating the areas occupied by 10 land cover classes within the target area across six observation years between 2015 and 2023. Comparing the performance of the algorithms, the SVM classification approach increased the accuracy, with 98% of pixels being stable, showing qualitative improvements in image classification. This paper contributes to the natural resource management and environmental monitoring of Senegal, West Africa, through advanced cartographic methods applied to remote sensing of Earth observation data. Full article
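For the supervised half of this comparison, the scikit-learn sketch below shows the general shape of SVM pixel classification on multispectral reflectance. The synthetic data, band count, and hyperparameters are placeholders, and the study itself runs its SVM inside GRASS GIS rather than Python.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for labeled training pixels: 7 Landsat OLI bands,
# 10 land cover classes (matching the class count reported in the abstract).
rng = np.random.default_rng(42)
X = rng.random((5000, 7))        # per-pixel spectral reflectance
y = rng.integers(0, 10, 5000)    # land cover class labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# RBF-kernel SVM with feature scaling, a common setup for reflectance inputs.
svm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
svm.fit(X_tr, y_tr)
print(f"overall accuracy: {accuracy_score(y_te, svm.predict(X_te)):.2f}")
```

On the random data above the accuracy hovers near chance; with real labeled training pixels the same pipeline yields the per-class accuracies an analyst would evaluate.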
Figures:
Figure 1. Study area with segments of the Landsat images shown on a topographic map of Senegal. Software: GMT. Map source: Author.
Figure 2. Data capture of Landsat images from the USGS EarthExplorer repository.
Figure 3. Landsat images in RGB colors covering the Cape Verde Peninsula region and Saloum River Delta, West Senegal, in February: (a) 2015; (b) 2018; (c) 2020; (d) 2021; (e) 2022; (f) 2023.
Figure 4. Workflow scheme illustrating the data and the main methodological steps. Software: R version 4.3.3, library DiagrammeR version 1.0.11. Diagram source: Author.
Figure 5. False color composites of the Landsat 8-9 OLI/TIRS images with vegetation colored red, using a combination of spectral bands 5 (Near Infrared (NIR)), 4 (Red), and 3 (Green) of the Landsat OLI sensor, covering the study area using February scenes: (a) 2015; (b) 2018; (c) 2020; (d) 2021; (e) 2022; (f) 2023.
Figure 6. Land cover types in Senegal according to the FAO classification scheme. Software: QGIS v. 3.22. Map source: Author.
Figure 7. Classification of the Landsat images covering the Cape Verde Peninsula region and the Saloum River Delta, West Senegal: (a) 2015; (b) 2018; (c) 2020; (d) 2021; (e) 2022; (f) 2023.
Figure 8. Results of the Support Vector Machine (SVM)-based classification of the Landsat images covering the same area: (a) February 2015; (b) February 2018; (c) February 2020; (d) February 2021; (e) February 2022; (f) February 2023.
Figure 9. Accuracy evaluated from the pixel confidence levels with rejection probability values for the Landsat images covering the same area: (a) 2015; (b) 2018; (c) 2020; (d) 2021; (e) 2022; (f) 2023.
19 pages, 12043 KiB  
Article
Collection of a Hyperspectral Atmospheric Cloud Dataset and Enhancing Pixel Classification through Patch-Origin Embedding
by Hua Yan, Rachel Zheng, Shivaji Mallela, Randy Russell and Olcay Kursun
Remote Sens. 2024, 16(17), 3315; https://doi.org/10.3390/rs16173315 - 6 Sep 2024
Viewed by 392
Abstract
Hyperspectral cameras collect detailed spectral information at each image pixel, contributing to the identification of image features. The rich spectral content of hyperspectral imagery has led to its application in diverse fields of study. This study focused on cloud classification using a dataset of hyperspectral sky images captured by a Resonon PIKA XC2 camera. The camera records images using 462 spectral bands, ranging from 400 to 1000 nm, with a spectral resolution of 1.9 nm. Our unlabeled dataset comprised 33 parent hyperspectral images (HSIs), each measuring 4402-by-1600 pixels. Drawing on the meteorological expertise within our team, we manually labeled pixels by extracting 10 to 20 sample patches of 50-by-50 pixels from each parent image. This process yielded a collection of 444 patches, each labeled with one of seven cloud and sky condition categories. To embed the inherent data structure while classifying individual pixels, we introduced a technique that boosts classification accuracy by incorporating patch-specific information into each pixel's feature vector: patch-level classifiers were first trained, and the posterior probabilities they generated, which capture the unique attributes of each patch, were concatenated with the pixel's original spectral data to form an augmented feature vector. A final classifier then mapped the augmented vectors to the seven cloud/sky categories. The results compared favorably to a baseline model without patch-origin embedding, showing that incorporating the spatial context along with the spectral information inherent in hyperspectral images enhances classification accuracy in hyperspectral cloud classification. The dataset is available on IEEE DataPort. Full article
(This article belongs to the Special Issue Deep Learning for Remote Sensing and Geodata)
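The patch-origin embedding described above can be sketched in a few lines. Everything below is synthetic and illustrative (the array sizes, the mean-spectrum patch summary, and the RF/LR classifier choices are assumptions); in practice the patch posteriors should come from held-out or cross-validated predictions so the pixel classifier is not trained on leaked information.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_patches, px_per_patch, n_bands, n_classes = 70, 100, 462, 7

# Synthetic stand-ins: per-pixel spectra grouped by patch, one label per patch.
spectra = rng.random((n_patches, px_per_patch, n_bands))
patch_labels = rng.integers(0, n_classes, n_patches)

# Step 1: a patch-level classifier on a patch summary (here, the mean spectrum).
patch_features = spectra.mean(axis=1)
patch_clf = RandomForestClassifier(n_estimators=100, random_state=0)
patch_clf.fit(patch_features, patch_labels)
patch_posteriors = patch_clf.predict_proba(patch_features)  # (n_patches, classes seen)

# Step 2: augment every pixel's spectrum with its parent patch's posteriors.
pixel_X = spectra.reshape(-1, n_bands)
pixel_y = np.repeat(patch_labels, px_per_patch)
origin_embedding = np.repeat(patch_posteriors, px_per_patch, axis=0)
pixel_X_aug = np.hstack([pixel_X, origin_embedding])

# Step 3: a final per-pixel classifier on the augmented feature vectors.
pixel_clf = LogisticRegression(max_iter=2000)
pixel_clf.fit(pixel_X_aug, pixel_y)
print(pixel_X_aug.shape)  # (7000, 462 + number of patch classes)
```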
Figures:
Graphical abstract.
Figure 1. Working principle of the Resonon Hyperspectral Imager (inspired by [11]).
Figure 2. Resonon Pika XC2 camera mounted on a tilt head and attached to a rotational stage that captures sky images covering a 90-degree range in azimuth [12].
Figure 3. A sample parent image with some sample patches marked in red squares.
Figure 4. Sample patch examples for each cloud/sky category. (a) Dense dark cumuliform clouds (c01). (b) Dense bright cumuliform clouds (c02). (c) Semi-transparent cumuliform clouds (c03). (d) Dense cirroform clouds (c04). (e) Semi-transparent cirroform clouds (c05). (f) Low aerosol clear sky (c06). (g) Moderate/high aerosol clear sky (c07).
Figure 5. Spectra of three exemplary pixels obtained from three separate cumuliform cloud patches, each with 462 bands.
Figure 6. Normalized version of the cumuliform spectra of the three pixels and the class average of the normalized spectra.
Figure 7. Spectra of three exemplary pixels obtained from three separate cirroform cloud patches, each with 462 bands.
Figure 8. Normalized version of the cirroform spectra of the three pixels and the class average of the normalized spectra.
Figure 9. Spectra of three exemplary pixels obtained from three separate clear-sky patches, each with 462 bands.
Figure 10. Normalized version of the clear-sky spectra of the three pixels and the class average of the normalized spectra.
Figure 11. Notations used for the origin of a parent image and the location and size of a patch in the parent image.
Figure 12. Parent image file naming convention of the dataset [10].
Figure 13. Patch image file naming convention of the dataset [10].
Figure 14. CNN architecture used for patch classification using the RGB renders.
Figure 15. CNN architecture used for feature extraction. Outputs from the network's GlobalMaxPooling layer serve as features to downstream classifiers, either LR or RF.
Figure 16. Classification results on sample parent images (from [21]), which are large images not included in the training dataset, demonstrating the model's performance on new, unseen data.
18 pages, 20185 KiB  
Article
Soil Salinity Prediction in an Arid Area Based on Long Time-Series Multispectral Imaging
by Wenju Zhao, Zhaozhao Li, Haolin Li, Xing Li and Pengtao Yang
Agriculture 2024, 14(9), 1539; https://doi.org/10.3390/agriculture14091539 - 6 Sep 2024
Viewed by 296
Abstract
Traditional soil salinity measurement methods are generally complex and labor-intensive, restricting the long-term monitoring of soil salinity, particularly in arid areas. In this context, soil salt content (SSC) data from farms in the Heihe River Basin in Northwest China were collected over three consecutive years (2021, 2022, and 2023). In addition, the spectral reflectance and texture features of different sampling sites in the study area were extracted from long-term unmanned aerial vehicle (UAV) multispectral images, with a newly introduced red edge band used in place of the red and near-infrared bands; spectral indices were then calculated, and four sensitive variable combinations were used to predict soil salt contents. A Pearson correlation analysis was performed to screen 57 sensitive features, and 36 modeling scenarios were conducted based on the Extreme Gradient Boosting (XGBoost, implemented in R 4.3.1), Backpropagation Neural Network (BPNN), and Random Forest (RF) algorithms. The optimal algorithm for predicting the soil salt contents of farmland in the Heihe River Basin, in the arid region of Northwest China, was determined. The results showed a higher prediction accuracy for the XGBoost algorithm than for the RF and BPNN algorithms, accurately reflecting the actual soil salt contents in the arid area. The most accurate predictions were obtained in 2023 using the XGBoost algorithm, with coefficient of determination (R2), root mean square error (RMSE), and mean absolute error (MAE) ranges of 0.622–0.820, 0.086–0.157, and 0.078–0.134, respectively, whereas the most stable predictions were obtained using the data collected in 2022. Across the different sensitive variable input combinations, the XGBoost algorithm with the spectral index–spectral reflectance–texture feature combination yielded higher prediction accuracies than the other combinations in 2022 and 2023. Specifically, the R2, RMSE, and MAE values obtained using this combination were 0.674, 0.133, and 0.086 in 2022 and 0.820, 0.165, and 0.134 in 2023, respectively. Our results therefore demonstrate that the spectral index–spectral reflectance–texture feature combination was the optimal sensitive variable input for the machine learning algorithms, of which XGBoost is the optimal model for predicting soil salt contents. The results of this study provide a theoretical basis for the rapid and accurate prediction of soil salinity in arid areas. Full article
(This article belongs to the Section Agricultural Soils)
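A compact Python sketch of the screening-plus-regression workflow described here (the study implements XGBoost in R; the synthetic data, the p < 0.05 cutoff, and the hyperparameters below are placeholder assumptions):

```python
import numpy as np
from scipy.stats import pearsonr
from xgboost import XGBRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

rng = np.random.default_rng(7)
n_samples, n_features = 200, 57     # 57 mirrors the number of screened features
X = rng.random((n_samples, n_features))
# Synthetic soil salt content driven by two features plus noise.
ssc = 0.6 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0.0, 0.05, n_samples)

# Pearson screening: keep features significantly correlated with SSC (p < 0.05).
keep = [j for j in range(n_features) if pearsonr(X[:, j], ssc)[1] < 0.05]
X_sel = X[:, keep]

X_tr, X_te, y_tr, y_te = train_test_split(X_sel, ssc, test_size=0.3, random_state=0)
model = XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
rmse = float(np.sqrt(mean_squared_error(y_te, pred)))
print(f"R2={r2_score(y_te, pred):.3f}  RMSE={rmse:.3f}  "
      f"MAE={mean_absolute_error(y_te, pred):.3f}")
```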
Figures:
Figure 1. Geographic location of the study area.
Figure 2. Data collection and processing. (a) DJI Phantom 4 collecting multispectral images. (b) Measuring the salt content of soil samples. (c) Alfalfa.
Figure 3. Correlations of the observed soil salt contents in the different years with the spectral bands and texture features.
Figure 4. Comparison of the measured and predicted soil salt contents.
Figure 5. Evaluation of the prediction accuracies of the different machine learning algorithms.
11 pages, 11077 KiB  
Article
Forage Height and Above-Ground Biomass Estimation by Comparing UAV-Based Multispectral and RGB Imagery
by Hongquan Wang, Keshav D. Singh, Hari P. Poudel, Manoj Natarajan, Prabahar Ravichandran and Brandon Eisenreich
Sensors 2024, 24(17), 5794; https://doi.org/10.3390/s24175794 - 6 Sep 2024
Viewed by 343
Abstract
Crop height and biomass are two important phenotyping traits for screening forage population types at local and regional scales. This study aims to compare the performance of multispectral and RGB sensors onboard drones for quantitative retrievals of forage crop height and biomass at very high resolution. We acquired unmanned aerial vehicle (UAV) multispectral images (MSIs) at 1.67 cm spatial resolution and visible (RGB) data at 0.31 cm resolution, and measured forage height and above-ground biomass over alfalfa (Medicago sativa L.) breeding trials in the Canadian Prairies. (1) For height estimation, the digital surface model (DSM) and digital terrain model (DTM) were extracted from both the MSI and RGB data. As the DTM resolution is five times coarser than that of the DSM, we applied an aggregation algorithm to the DSM so that the DSM and DTM share the same spatial resolution. The difference between DSM and DTM was computed as the canopy height model (CHM), at 8.35 cm resolution for the MSI data and 1.55 cm for the RGB data. (2) For biomass estimation, the normalized difference vegetation index (NDVI) from MSI data and the excess green (ExG) index from RGB data were regressed against ground measurements, leading to empirical models. The results indicate better performance of MSI for above-ground biomass (AGB) retrievals at 1.67 cm resolution and better performance of RGB data for canopy height retrievals at 1.55 cm. Although the retrieved height was well correlated with the ground measurements, a significant underestimation was observed; we therefore developed a bias correction function to match the retrievals to the ground measurements. This study provides insight into selecting the optimal sensor for specific targeted vegetation growth traits in a forage crop. Full article
(This article belongs to the Section Remote Sensors)
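The retrieval steps described above reduce to a few array operations. The sketch below shows the CHM differencing with block aggregation, the NDVI and ExG indices, and a linear bias correction; the 5x aggregation factor mirrors the stated resolution ratio, while the synthetic arrays and the slope/intercept placeholders are assumptions, not the study's fitted values.

```python
import numpy as np

def canopy_height_model(dsm: np.ndarray, dtm: np.ndarray, factor: int) -> np.ndarray:
    """CHM = DSM - DTM after block-averaging the finer DSM onto the DTM grid.

    `factor` is the resolution ratio; 5 mirrors the stated DTM/DSM ratio.
    """
    h, w = dsm.shape
    dsm_coarse = dsm[:h - h % factor, :w - w % factor] \
        .reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))
    return dsm_coarse - dtm[:dsm_coarse.shape[0], :dsm_coarse.shape[1]]

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized difference vegetation index from NIR and red reflectance."""
    return (nir - red) / (nir + red + 1e-9)

def excess_green(r: np.ndarray, g: np.ndarray, b: np.ndarray) -> np.ndarray:
    """ExG = 2g - r - b, computed on band values normalized to [0, 1]."""
    return 2.0 * g - r - b

def bias_correct(retrieved: np.ndarray, slope: float = 1.0, intercept: float = 0.0) -> np.ndarray:
    """Linear bias correction fitted against ground truth (placeholder coefficients)."""
    return slope * retrieved + intercept

rng = np.random.default_rng(3)
dsm = rng.random((50, 50)) + 1.0   # synthetic fine-resolution surface model (m)
dtm = rng.random((10, 10))         # synthetic terrain model, 5x coarser
chm = canopy_height_model(dsm, dtm, factor=5)
print(chm.shape)                    # (10, 10)
```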
Figures:
Figure 1. Study site with UAV multispectral imagery on 13 October 2022.
Figure 2. UAV multispectral and RGB imaging analysis workflow to estimate canopy height and above-ground biomass.
Figure 3. (a) DSM and (b) DTM from multispectral imagery, in comparison with (c) DSM and (d) DTM from RGB imagery.
Figure 4. Spatial distribution of (a) MSI NDVI and (b) RGB ExG on 13 October 2022.
Figure 5. Canopy height model (CHM) from (a) MSI and (b) RGB imagery.
Figure 6. Validations of the retrieved height from MSI against ground measurements, with minimum thresholds of (a) 5 cm and (b) 10 cm.
Figure 7. Validations of the retrieved height from RGB data: (a) original retrieval; (b) after bias correction.
Figure 8. Relationships between UAV-based mean NDVI values for each plot on 13 October and measured fresh biomass on (a) 8 August and (b) 12 September 2022. Each plot covers an area of 6.27 m².
Figure 9. Relationships between UAV-based mean ExG values for each plot (black dots) on 13 October and measured fresh biomass on (a) 8 August and (b) 12 September 2022.
Figure 10. UAV multispectral-retrieved (a) biomass and (b) bias-corrected biomass on 12 September 2022.