Search Results (712)

Search Parameters:
Keywords = hyperspectral imagery

19 pages, 12043 KiB  
Article
Collection of a Hyperspectral Atmospheric Cloud Dataset and Enhancing Pixel Classification through Patch-Origin Embedding
by Hua Yan, Rachel Zheng, Shivaji Mallela, Randy Russell and Olcay Kursun
Remote Sens. 2024, 16(17), 3315; https://doi.org/10.3390/rs16173315 - 6 Sep 2024
Viewed by 350
Abstract
Hyperspectral cameras collect detailed spectral information at each image pixel, contributing to the identification of image features. The rich spectral content of hyperspectral imagery has led to its application in diverse fields of study. This study focused on cloud classification using a dataset of hyperspectral sky images captured by a Resonon PIKA XC2 camera. The camera records images using 462 spectral bands, ranging from 400 to 1000 nm, with a spectral resolution of 1.9 nm. Our unlabeled dataset comprised 33 parent hyperspectral images (HSI), each measuring 4402-by-1600 pixels. Drawing on the meteorological expertise within our team, we manually labeled pixels by extracting 10 to 20 sample patches from each parent image, each patch a 50-by-50 pixel field. This process yielded a collection of 444 patches, each labeled with one of seven cloud and sky condition categories. To embed the inherent data structure while classifying individual pixels, we introduced a technique that boosts classification accuracy by incorporating patch-specific information into each pixel's feature vector. The posterior probabilities generated by patch-level classifiers, which capture the unique attributes of each patch, were concatenated with the pixel's original spectral data to form an augmented feature vector. We then applied a final classifier to map the augmented vectors to the seven cloud/sky categories. The results compared favorably to a baseline model without patch-origin embedding, showing that incorporating spatial context along with the spectral information inherent in hyperspectral images enhances accuracy in hyperspectral cloud classification. The dataset is available on IEEE DataPort.
(This article belongs to the Special Issue Deep Learning for Remote Sensing and Geodata)
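The patch-origin embedding described above reduces to a two-stage pipeline: obtain class posteriors from a first-stage classifier, append them to each pixel's spectrum, and train a final classifier on the augmented vectors. The sketch below illustrates the idea with scikit-learn on synthetic data; it is not the authors' code, the single pixel-level posterior source is a simplification of their patch-level classifiers, and all names and sizes are illustrative.

```python
# Minimal sketch of patch-origin embedding (illustrative names; synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_pixels, n_bands, n_classes = 2000, 462, 7
X = rng.normal(size=(n_pixels, n_bands))       # pixel spectra
y = rng.integers(0, n_classes, size=n_pixels)  # cloud/sky labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: a classifier produces posterior probabilities that summarize the
# context each pixel came from (the paper trains this at the patch level;
# in practice, out-of-fold posteriors would avoid training leakage).
stage1 = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
P_tr = stage1.predict_proba(X_tr)              # (n, 7) posteriors
P_te = stage1.predict_proba(X_te)

# Stage 2: concatenate posteriors with the raw spectrum (augmented vector)
# and train the final pixel classifier on the augmented features.
final_clf = LogisticRegression(max_iter=1000)
final_clf.fit(np.hstack([X_tr, P_tr]), y_tr)
print("accuracy:", final_clf.score(np.hstack([X_te, P_te]), y_te))
```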
Graphical abstract
Figure">
Figure 1: Working principle of the Resonon Hyperspectral Imager (inspired from [11]).
Figure 2: Resonon Pika XC2 camera mounted on a tilt head and attached to a rotational stage that captures sky images covering a 90-degree range in azimuth [12].
Figure 3: A sample parent image with some sample patches marked in red squares.
Figure 4: Sample patch examples for each cloud/sky category. (a) Dense dark cumuliform clouds (c01). (b) Dense bright cumuliform clouds (c02). (c) Semi-transparent cumuliform clouds (c03). (d) Dense cirroform clouds (c04). (e) Semi-transparent cirroform clouds (c05). (f) Low aerosol clear sky (c06). (g) Moderate/high aerosol clear sky (c07).
Figure 5: Spectra of three exemplary pixels obtained from three separate cumuliform cloud patches, each with 462 bands.
Figure 6: Normalized version of the cumuliform spectra of the three pixels and the class average of the normalized spectra.
Figure 7: Spectra of three exemplary pixels obtained from three separate cirroform cloud patches, each with 462 bands.
Figure 8: Normalized versions of the cirroform spectra of the three pixels and the class average of the normalized spectra.
Figure 9: Spectra of three exemplary pixels obtained from three separate clear-sky patches, each with 462 bands.
Figure 10: Normalized version of the clear-sky spectra of the three pixels and the class average of the normalized spectra.
Figure 11: Notations used for the origin of a parent image and the location and size of a patch in the parent image.
Figure 12: Parent image file naming convention of the dataset [10].
Figure 13: Patch image file naming convention of the dataset [10].
Figure 14: CNN architecture used for patch classification using the RGB renders.
Figure 15: CNN architecture used for feature extraction. Outputs from the network's GlobalMaxPooling layer serve as features to downstream classifiers, either LR or RF.
Figure 16: Classification results on sample parent images (from [21]), which are large images not included in the training dataset. The results demonstrate the performance of the classification model on new, unseen data.
24 pages, 8893 KiB  
Article
Assessing Data Preparation and Machine Learning for Tree Species Classification Using Hyperspectral Imagery
by Wenge Ni-Meister, Anthony Albanese and Francesca Lingo
Remote Sens. 2024, 16(17), 3313; https://doi.org/10.3390/rs16173313 - 6 Sep 2024
Viewed by 369
Abstract
Tree species classification using hyperspectral imagery shows considerable promise for developing a large-scale, high-resolution model for identifying tree species, providing unprecedented detail on global tree species distribution. Many questions remain unanswered about the best practices for creating a global, general hyperspectral tree species classification model. This study addresses three key issues in creating such a model. We assessed the effectiveness of three data-labeling methods for creating training data, three data-splitting methods for training/validation/testing, and machine-learning and deep-learning (including semi-supervised deep-learning) models for tree species classification using hyperspectral imagery at National Ecological Observatory Network (NEON) sites. Our analysis revealed that the existing data-labeling method using the field vegetation structure survey performed reasonably well. The random tree data-splitting technique was the most efficient method for both intra-site and inter-site classification, counteracting spatial autocorrelation and avoiding locally overfit models. Deep learning consistently outperformed random forest classification; both semi-supervised and supervised deep-learning models displayed the most promising results for creating a general taxa-classification model. This work demonstrates the possibility of developing tree-classification models that can identify tree species outside their training area and shows that semi-supervised deep learning may potentially utilize the untapped terabytes of unlabeled forest imagery.
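The "random tree" split keeps all pixels from a given crown in the same partition, which is how the spatial-autocorrelation leakage mentioned above is avoided. A minimal sketch, assuming scikit-learn's GroupShuffleSplit and synthetic data (the paper's actual splitting code is not shown here):

```python
# Sketch of "random tree" splitting: all pixels from one tree crown stay in
# the same split, reducing spatial-autocorrelation leakage (illustrative).
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(1)
n_pixels = 5000
X = rng.normal(size=(n_pixels, 426))           # hyperspectral pixels
y = rng.integers(0, 4, size=n_pixels)          # species labels
tree_id = rng.integers(0, 300, size=n_pixels)  # crown each pixel belongs to

splitter = GroupShuffleSplit(n_splits=1, test_size=0.3, random_state=1)
train_idx, test_idx = next(splitter.split(X, y, groups=tree_id))

# No crown appears on both sides of the split.
assert set(tree_id[train_idx]).isdisjoint(tree_id[test_idx])
```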
Graphical abstract
Figure">
Figure 1: Locations of NEON study sites used in this project.
Figure 2: Locations of vegetation sampling plots from the NIWO site.
Figure 3: Illustration of data sources used from each NEON site. From left to right: 10 cm RGB imagery, 1 m true-color composite from hyperspectral imagery, and 1 m canopy-height model (CHM) derived from lidar, all collected in August 2020. Survey tree locations are indicated in red.
Figure 4: Overview of the experimental workflow.
Figure 5: Mean hyperspectral reflectance values for a study plot at the NIWO site before and after a simple de-noising operation. Bands with consistently low or noisy values were filtered out from further processing and analysis.
Figure 6: The three annotation methods used on the NIWO_014 study plot produced slightly different results, demonstrated well by the isolated tree in the middle right of the image. The filtering algorithm removes this tree location due to the difference between the CHM and surveyed tree height, the snapping algorithm changes its location, and the Scholl algorithm keeps the location unaltered. Original tree locations from the NEON woody vegetation survey are on the upper left.
Figure 7: Network designs for the deep-learning models utilized. The pre-training model uses the swapping assignments between views (SwAV) unsupervised clustering architecture to find clusters within the data. The encoder from the pre-training model is then used as a backbone for the semi-supervised multi-layer perceptron (MLP) model, while the supervised MLP model is initialized with no pre-training or prior exposure to the data.
Figure 8: Mean hyperspectral reflectance from 380 to 2510 nm, extracted from all polygons with half the maximum crown diameter at the NEON NIWO site for each of the dominant tree species: ABLAL (subalpine fir), PICOL (lodgepole pine), PIEN (Engelmann spruce), and PIFL2 (limber pine).
Figure 9: Results from testing different label-selection algorithms at the NIWO site. Five trials were run for each set of parameters, and the median overall accuracy among the trials was plotted. Minimum and maximum accuracy values from the trials are indicated with error bars.
Figure 10: Results from testing the transferability of trained models using the random pixel (labeled as Pixel), plot-divide (labeled as Plot), and random tree (labeled as Tree) data-splitting methods for training/validation/testing. All models were initially trained on data from the NIWO site and then tested on data from the RMNP site. Minimum and maximum accuracy values from the trials are indicated with error bars.
Figure 11: Results for deep-learning classification models with and without pre-training. The bar color indicates three cases: pre-training not performed (purple), performed on the NIWO site (orange), or performed on the STEI site (green). The top row results were trained and classified on the NIWO site, while the bottom row results were trained on the NIWO site and classified on the RMNP site.
32 pages, 11057 KiB  
Article
Monitoring Helicoverpa armigera Damage with PRISMA Hyperspectral Imagery: First Experience in Maize and Comparison with Sentinel-2 Imagery
by Fruzsina Enikő Sári-Barnácz, Mihály Zalai, Gábor Milics, Mariann Tóthné Kun, János Mészáros, Mátyás Árvai and József Kiss
Remote Sens. 2024, 16(17), 3235; https://doi.org/10.3390/rs16173235 - 31 Aug 2024
Viewed by 610
Abstract
The cotton bollworm (CBW) poses a significant risk to maize crops worldwide. This study investigated whether hyperspectral satellites offer an accurate evaluation method for monitoring maize ear damage caused by CBW larvae. The study analyzed records of maize ear damage for four maize fields in Southeast Hungary, Csongrád-Csanád County, in 2021. The performance of Sentinel-2 bands, PRISMA bands, and synthesized Sentinel-2 bands was compared using linear regression, partial least squares regression (PLSR), and two-band vegetation index (TBVI) methods. The best newly developed indices derived from the TBVI method were compared with existing vegetation indices. In mid-early grain maize fields, the narrow bands of PRISMA generally performed better than wide bands, unlike in sweet maize fields, where the Sentinel-2 bands performed better. In grain maize fields, the best index was the normalized difference of λA = 571 nm and λB = 2276 nm (R2 = 0.33–0.54, RMSE = 0.06–0.05), while in sweet maize fields, the best-performing index was the normalized difference of the green (B03) and blue (B02) Sentinel-2 bands (R2 = 0.54–0.72, RMSE = 0.02). The findings demonstrate the advantages and constraints of remote sensing for plant protection and pest monitoring.
(This article belongs to the Special Issue Advancements in Remote Sensing for Sustainable Agriculture)
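The TBVI used here is a normalized difference of two reflectance bands. A minimal sketch of that computation, with example reflectance values standing in for real PRISMA pixels:

```python
# Two-band vegetation index (TBVI) as a normalized difference, e.g. the
# paper's best grain-maize index with lambda_A = 571 nm, lambda_B = 2276 nm.
import numpy as np

def tbvi(r_a: np.ndarray, r_b: np.ndarray) -> np.ndarray:
    """Normalized difference of two reflectance bands."""
    return (r_a - r_b) / (r_a + r_b + 1e-12)  # epsilon avoids divide-by-zero

r571 = np.array([0.12, 0.15, 0.10])   # reflectance at 571 nm (example values)
r2276 = np.array([0.20, 0.18, 0.25])  # reflectance at 2276 nm (example values)
print(tbvi(r571, r2276))
```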
Figure 1: Selected sampling zones in sweet maize (Nm1 and Nm2, colored red) and grain maize fields (Nm5 and Kd, colored yellow) in Southeast Hungary. The red point on the blank map of Hungary represents the location of the fields. The sampling zones were selected based on the Normalized Difference Vegetation Index (NDVI) calculated from Sentinel-2 images collected on 30 July 2021. The background image is a Sentinel-2 true-color image (10 m spatial resolution) collected on 30 July 2021.
Figure 2: Daily temperature (minimum, maximum, average) and precipitation totals over the maize-growing season, and weekly average catches of male adult cotton bollworms.
Figure 3: Average reflectance of sweet and grain maize fields in the spectral range acquired by PRISMA on two dates, with additional imagery processing: spectral ranges where reflectance was close to zero (reflectance less than 0.013) were omitted. Omitted spectral ranges are denoted in gray, and their boundary wavelengths are given.
Figure 4: Workflow of the analysis of different satellite imagery to determine suitability for monitoring maize ear damage by cotton bollworm larvae. The workflow consisted of two main parts: data acquisition (gray background) and statistical analysis (light-yellow background). Operations associated with the various satellites are identified by color: Sentinel-2 (purple), PRISMA (red), and Synthetic Sentinel (green).
Figure 5: Larval damage of cotton bollworm in the sampling zones of each field. The width of the violin plots shows the density of damage percentages across the sampling zones of a maize field.
Figure 6: R2 of linear regression between grain maize ear damage by CBW larvae and single bands of Sentinel-2, Synthesized Sentinel based on PRISMA bands, and the best-performing single bands of PRISMA.
Figure 7: R2 of linear regression between sweet maize ear damage by CBW larvae and single bands of Sentinel-2, Synthesized Sentinel based on PRISMA bands, and the best-performing single bands of PRISMA.
Figure 8: Response of Sentinel-2 spectral bands (A) and Synthesized Sentinel spectral bands (B), as defined by PLSR loadings, to sweet maize and grain maize ear damage caused by cotton bollworm larvae.
Figure 9: Response of PRISMA-based spectral regions' reflectance (as defined by PLSR loadings) to sweet maize and grain maize ear damage caused by cotton bollworm larvae.
Figure 10: λ–λ plots expressing the correspondence (R2) of the Kd grain maize field's TBVIs (based on Sentinel-2 (A), Synthesized Sentinel (B), and PRISMA (C) bands) to cotton bollworm larval ear damage.
Figure 11: Cross-sensor agreement between the different vegetation indices of all 20 × 20 m zones of the Kd grain maize field (blue dots) derived from PRISMA and Sentinel-2 bands (based on imagery acquired on 30 July 2021). The black dotted lines indicate trendlines, while the orange lines represent diagonals (perfect agreement).
Figure 12: Existing vegetation index maps of the Kd grain maize field derived from PRISMA and Sentinel-2 images (imagery acquired on 30 July 2021).
Figure 13: Newly developed two-band vegetation index maps of the Kd grain maize field derived from PRISMA and Sentinel-2 images (imagery acquired on 30 July 2021).
20 pages, 24086 KiB  
Article
Clustering Hyperspectral Imagery via Sparse Representation Features of the Generalized Orthogonal Matching Pursuit
by Wenqi Guo, Xu Xu, Xiaoqiang Xu, Shichen Gao and Zibu Wu
Remote Sens. 2024, 16(17), 3230; https://doi.org/10.3390/rs16173230 - 31 Aug 2024
Viewed by 241
Abstract
This study focused on improving the clustering performance of hyperspectral imaging (HSI) by employing the Generalized Orthogonal Matching Pursuit (GOMP) algorithm for feature extraction. Hyperspectral remote sensing, which is crucial in fields such as environmental monitoring and agriculture, faces challenges due to its high dimensionality and complexity. Supervised learning methods require extensive labeled data and computational resources, while clustering, an unsupervised method, offers a more efficient alternative. This research presents a novel approach using GOMP to enhance clustering performance in HSI. The GOMP algorithm iteratively selects multiple dictionary elements for sparse representation, which makes it well suited for handling complex HSI data. The proposed method was tested on two publicly available HSI datasets and evaluated against other methods to demonstrate its effectiveness in enhancing clustering performance.
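A compact illustration of the GOMP idea follows: unlike plain OMP, several dictionary atoms are selected per iteration before the least-squares refit. This is a from-scratch sketch on random data, not the paper's implementation; the dictionary size and selection parameters are illustrative.

```python
# Minimal GOMP (Generalized Orthogonal Matching Pursuit) sketch: N atoms are
# selected per iteration instead of one (illustrative, not the paper's code).
import numpy as np

def gomp(D, x, n_iter=5, n_select=3):
    """Sparse-code signal x over dictionary D (atoms in columns)."""
    residual, support = x.copy(), []
    for _ in range(n_iter):
        corr = np.abs(D.T @ residual)
        corr[support] = 0                      # skip already-chosen atoms
        support += list(np.argsort(corr)[-n_select:])
        coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coef    # refit on the whole support
    codes = np.zeros(D.shape[1])
    codes[support] = coef
    return codes                               # sparse feature vector

rng = np.random.default_rng(2)
D = rng.normal(size=(462, 128))                # stand-in learned dictionary
D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
pixel = rng.normal(size=462)                   # one hyperspectral pixel
print(np.count_nonzero(gomp(D, pixel)))        # number of active atoms
```

The resulting sparse codes, computed per pixel, are what the clustering stage (K-means, hierarchical, or spectral clustering) would operate on.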
Graphical abstract
Figure">
Figure 1: The scene images of the two datasets used in our experiment.
Figure 2: GOMP-based sparse representation and clustering workflow for hyperspectral images. The process involves three main stages: (a) dictionary learning, where the goal is to minimize reconstruction error and produce a compact representation of the hyperspectral data; (b) GOMP sparse coding, where dictionary atoms are greedily selected during each iteration to reduce residuals and achieve a sparse data representation; and (c) clustering analysis, where the sparsely coded data are applied to clustering methods such as K-means, hierarchical clustering, and spectral clustering to evaluate the enhancement in clustering performance achieved by GOMP.
Figure 3: t-SNE visualization of features from the Pavia University dataset, where colors indicate ground-truth classes: (a) original features; (b) PCA-processed features; (c) GOMP-processed features.
Figure 4: t-SNE visualization of features from the Salinas dataset, where colors indicate ground-truth classes: (a) original features; (b) PCA-processed features; (c) GOMP-processed features.
Figure 5: Clustering maps of the Salinas dataset by different clustering algorithms under different feature extraction techniques.
Figure 6: Clustering map of the Salinas dataset by the K-means clustering algorithm under different feature extraction techniques.
Figure 7: Clustering map of the Salinas dataset by the spectral clustering algorithm under different feature extraction techniques.
Figure 8: Clustering maps of the Pavia University dataset by different clustering algorithms under different feature extraction techniques.
Figure 9: Clustering maps of the Pavia University dataset by the K-means clustering algorithm under different feature extraction techniques.
Figure 10: Clustering maps of the Pavia University dataset by the hierarchical clustering algorithm under different feature extraction techniques.
Figure 11: Classification accuracy comparison across different classifiers and feature extraction methods.
23 pages, 39394 KiB  
Article
Fine-Scale Mangrove Species Classification Based on UAV Multispectral and Hyperspectral Remote Sensing Using Machine Learning
by Yuanzheng Yang, Zhouju Meng, Jiaxing Zu, Wenhua Cai, Jiali Wang, Hongxin Su and Jian Yang
Remote Sens. 2024, 16(16), 3093; https://doi.org/10.3390/rs16163093 - 22 Aug 2024
Viewed by 809
Abstract
Mangrove ecosystems play an irreplaceable role in coastal environments by providing essential ecosystem services. Diverse mangrove species have different functions due to their morphological and physiological characteristics. A precise spatial distribution map of mangrove species is therefore crucial for biodiversity maintenance and environmental conservation of coastal ecosystems. Traditional satellite data are limited for fine-scale mangrove species classification due to low spatial resolution and limited spectral information. This study employed unmanned aerial vehicle (UAV) technology to acquire high-resolution multispectral and hyperspectral imagery of mangrove forests in Guangxi, China. We leveraged advanced algorithms, including RFE-RF for feature selection and machine learning models (Adaptive Boosting (AdaBoost), eXtreme Gradient Boosting (XGBoost), Random Forest (RF), and Light Gradient Boosting Machine (LightGBM)), to achieve mangrove species mapping with high classification accuracy. The study assessed the classification performance of these four machine learning models for two types of image data (UAV multispectral and hyperspectral imagery). The results demonstrated that hyperspectral imagery outperformed multispectral data, offering enhanced noise reduction and classification performance. Hyperspectral imagery produced mangrove species classifications with overall accuracy (OA) higher than 91% across the four machine learning models. LightGBM achieved the highest OA of 97.15% and a kappa coefficient (Kappa) of 0.97 based on hyperspectral imagery. Dimensionality reduction and feature extraction techniques were effectively applied to the UAV data, with vegetation indices proving particularly valuable for species classification. This research underscores the effectiveness of UAV hyperspectral images with machine learning models for fine-scale mangrove species classification. The approach has the potential to significantly improve ecological management and conservation strategies, providing a robust framework for monitoring and safeguarding these essential coastal habitats.
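The feature-selection-plus-classifier pipeline can be sketched as follows, assuming scikit-learn and synthetic data; RFE here is driven by random-forest importances (the paper's RFE-RF), and a random forest stands in for the four boosted/bagged classifiers compared in the study.

```python
# Sketch of RFE-RF feature selection followed by classification (illustrative).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(600, 150))     # spectral bands + vegetation indices
y = rng.integers(0, 6, size=600)    # mangrove species labels

# Recursive feature elimination driven by random-forest importances.
selector = RFE(RandomForestClassifier(n_estimators=100, random_state=3),
               n_features_to_select=30, step=10).fit(X, y)
X_sel = selector.transform(X)

clf = RandomForestClassifier(n_estimators=200, random_state=3)
print(cross_val_score(clf, X_sel, y, cv=3).mean())
```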
Figure 1: Study area and UAV-based visible imagery (A: Yingluo Bay; B: Pearl Bay).
Figure 2: Workflow diagram illustrating the methodology of this study.
Figure 3: Comparison of user's and producer's accuracies for mangrove species classification obtained by the four learning models based on multispectral and hyperspectral images in Yingluo Bay.
Figure 4: Comparison of user's and producer's accuracies for mangrove species classification obtained by the LightGBM learning model based on the multispectral and hyperspectral images in Pearl Bay.
Figure 5: Mangrove species classification maps using the four learning models (LightGBM, RF, XGBoost, and AdaBoost) based on the UAV multispectral image (a–d) and hyperspectral image (e–h), respectively, in Yingluo Bay.
Figure 6: UAV visible image covering Yingluo Bay and three subsets (A–C) of the UAV multispectral and hyperspectral image classification results based on the LightGBM learning model.
Figure 7: Mangrove species classification maps using the LightGBM learning model based on the UAV multispectral image (a) and hyperspectral image (b) in Pearl Bay.
Figure 8: UAV visible image covering Pearl Bay and three subsets (A–C) of the UAV multispectral and hyperspectral image classification results using the LightGBM learning model.
Figure A1: Normalized confusion matrices of mangrove species classification using the four learning models (AdaBoost, XGBoost, RF, and LightGBM) based on UAV multispectral and hyperspectral images in Yingluo Bay.
23 pages, 11067 KiB  
Article
A Down-Scaling Inversion Strategy for Retrieving Canopy Water Content from Satellite Hyperspectral Imagery
by Meihong Fang, Xiangyan Hu, Jing M. Chen, Xueshiyi Zhao, Xuguang Tang, Haijian Liu, Mingzhu Xu and Weimin Ju
Forests 2024, 15(8), 1463; https://doi.org/10.3390/f15081463 - 20 Aug 2024
Viewed by 438
Abstract
Vegetation canopy water content (CWC) crucially affects stomatal conductance and photosynthesis and, consequently, is a key state variable in advanced ecosystem models. Remote sensing has been shown to be an effective tool for retrieving CWC. However, retrieval of CWC from satellite remote sensing data is affected by the vegetation canopy structure and soil background. This study proposes a methodology that combines a modified spectral down-scaling model with a highly general leaf water content inversion model to retrieve CWC while constraining the impacts of canopy structure and soil background on the retrieval. First, canopy spectra acquired by satellite sensors were down-scaled to leaf reflectance spectra according to the probabilities of viewing the sunlit foliage (PT) and background (PG) and the estimated spectral multiple scattering factor (M). Then, leaf water content, or equivalent water thickness (EWT), was obtained from the down-scaled leaf reflectance spectra via a leaf-scale EWT inversion model calibrated with PROSPECT simulation data. Finally, CWC was calculated as the product of the estimated leaf EWT and the canopy leaf area index. Validation of this coupled model was performed using satellite–ground synchronous observation data across various vegetation types within the study area, affirming the model's broad applicability. Results indicate that the modified spectral down-scaling model accurately retrieves leaf reflectance spectra, aligning closely with site-level measured spectra. Compared to the direct inversion approach, which performs poorly with Hyperion satellite images, the down-scaling strategy notably excels. Specifically, the Similarity Water Index (SWI)-based canopy EWT coupled model achieved the most precise estimation, with a normalized root mean square error (nRMSE) of 15.28% and an adjusted R2 of 0.77, surpassing the performance of the best-index Shortwave Angle Normalized Index (SANI)-based model (nRMSE = 15.61%, adjusted R2 = 0.52). Given its calibration using simulated data, this coupled model is a potent method for extracting canopy EWT from satellite imagery, suggesting its applicability to retrieving other vegetation biochemical components from satellite data.
(This article belongs to the Section Forest Inventory, Modeling and Remote Sensing)
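The final step of the strategy is a simple product: canopy water content per unit ground area is the retrieved leaf EWT scaled by LAI. A worked sketch with illustrative values and an assumed unit convention (EWT in g/cm² of leaf, CWC reported in g/m² of ground):

```python
# Sketch of the last step of the down-scaling strategy: CWC = leaf EWT x LAI
# (illustrative values; units are an assumption, not taken from the paper).
import numpy as np

ewt = np.array([0.012, 0.015, 0.010])  # leaf equivalent water thickness, g/cm^2
lai = np.array([3.2, 4.1, 2.8])        # leaf area index, m^2 leaf / m^2 ground

# 1 g/cm^2 of leaf = 1e4 g/m^2 of leaf; multiplying by LAI yields grams of
# water per m^2 of ground surface.
cwc = ewt * lai * 1e4
print(cwc)  # g of water per m^2 of ground
```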
Figure 1: Study area and sampling locations: (a) the study area, located in Menghai County (21.98°N, 100.29°E), Xishuangbanna, Yunnan Province, in southwest China; (b) RGB true-color image of the Hyperion remote sensing data; (c) sampling points for ground synchronous observation.
Figure 2: Workflow of the coupled down-scaling inversion strategy for retrieving leaf-level and canopy-level water content from satellite data, considering canopy structure and background effects.
Figure 3: Sensitivity of the simulated probability of viewing sunlit foliage (PT) (a) and sunlit background (PG) (b) to the solar zenith angle (SZA), LAI, and radius of tree crowns. Here, tree density was set at 4000 trees per hectare and VZA = 0°.
Figure 4: Correlations of NSAI with PT (a) and PG (b) for the Hyperion synthetic data under conditions of normal soil (N, orange dots), dry soil (D, green dots), and wet soil (W, blue dots). A total of 13,950,144 Hyperion samples and 10,764 Hyperion scenes are available for analysis. On 33 Hyperion pixels, we compare PT (c) and PG (d) estimates using NSAI-based models to the reference values inverted with the 4-Scale GO model. The red straight lines are the 1:1 lines. Correlations of NSAI with PT (a) and comparisons of PT estimated using NSAI with the reference values (c) are adapted from Fang et al. [25].
Figure 5: Spatial patterns of LAI (a) derived using MSR705 and the spatial distribution of PT (b) and PG (c) estimated using NSAI retrieved from the Hyperion image over the study area at a 30 m spatial resolution.
Figure 6: Correlations between leaf EWT and SWI for the measured data (a) and data simulated using the PROSPECT model (b). Validation of leaf EWT retrieved using the SWI-based model derived from the measured data (c) and the simulated data (d). All leaf reflectance spectra were resampled to Hyperion-equivalent spectra.
Figure 7: Spatial distribution of average leaf EWT inverted from the coupled down-scaling inversion strategy (a) and canopy water content per unit ground surface area derived from the LAI image and the retrieved average leaf EWT (b). The spatial resolution of the image is 30 × 30 m.
Figure 8: Comparison of measured CWC against values retrieved from the SWI-based (a) and SANI-based (b) coupled models using the down-scaling inversion strategy.
Figure A1: Spectra after preprocessing of the Hyperion image, compared between (a) vegetation with different crown closures (high, moderate, and low coverage) and (b) soil backgrounds with red and gray hues. Data adapted from Fang et al. [25].
Figure A2: Correlations of the viewing probabilities of sunlit crown and background (PT and PG) based on 4-Scale GO model simulations from the Hyperion synthetic data, which include 10,764 scenes.
20 pages, 7699 KiB  
Article
SSANet-BS: Spectral–Spatial Cross-Dimensional Attention Network for Hyperspectral Band Selection
by Chuanyu Cui, Xudong Sun, Baijia Fu and Xiaodi Shang
Remote Sens. 2024, 16(15), 2848; https://doi.org/10.3390/rs16152848 - 3 Aug 2024
Viewed by 559
Abstract
Band selection (BS) aims to reduce redundancy in hyperspectral imagery (HSI). Existing BS approaches typically model HSI in only a single dimension, either spectral or spatial, without exploring the interactions between dimensions. To this end, we propose an unsupervised BS method based on a spectral–spatial cross-dimensional attention network, named SSANet-BS. The network comprises three stages: a band attention module (BAM) that employs an attention mechanism to adaptively identify and select highly significant bands; two parallel spectral–spatial attention modules (SSAMs), which fuse complex spectral–spatial structural information across dimensions in HSI; and a multi-scale reconstruction network that learns spectral–spatial nonlinear dependencies in the SSAM-fused image at various scales and guides the BAM weights to converge automatically to the target bands via backpropagation. The three-stage structure of SSANet-BS enables the BAM weights to fully represent the saliency of the bands, so that valuable bands are obtained automatically. Experimental results on four real hyperspectral datasets demonstrate the effectiveness of SSANet-BS.
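As a rough intuition for the band attention module, a learnable gate can score each band, and the largest weights indicate candidate bands. The snippet below is a deliberately minimal PyTorch sketch, not SSANet-BS itself; the layer sizes and the global-pooling choice are assumptions.

```python
# Minimal band-attention sketch (illustrative; not the paper's BAM): a small
# network scores each band, and top-k weights suggest which bands to keep.
import torch
import torch.nn as nn

class BandAttention(nn.Module):
    def __init__(self, n_bands: int, hidden: int = 64):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(n_bands, hidden), nn.ReLU(),
            nn.Linear(hidden, n_bands), nn.Sigmoid())

    def forward(self, x):                            # x: (batch, n_bands)
        w = self.score(x.mean(dim=0, keepdim=True))  # global band weights
        return x * w, w                              # reweighted spectra

x = torch.randn(32, 200)                  # 32 pixels, 200 bands
bam = BandAttention(200)
reweighted, w = bam(x)
print(torch.topk(w.squeeze(), k=15).indices)  # 15 candidate bands
```

In the full method, these weights are trained end to end through the reconstruction loss, so the gate converges to bands that best explain the image rather than being thresholded on raw scores.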
Figure 1: Schematic diagrams of the model structures: (a) two-stage model; (b) three-stage model.
Figure 2: Overall structure of SSANet-BS.
Figure 3: Schematic diagram of the neural network in the BAM.
Figure 4: The datasets used in the experiment; the land cover types and the number of samples for each dataset are indicated. (a) IP220. (b) DC191. (c) PU103. (d) QY176.
Figure 5: The OA values of SSANet-BS and the comparison methods on the four HSI datasets. (a) IP220. (b) DC191. (c) PU103. (d) QY176.
Figure 6: Classification maps with 15 bands on the IP220 dataset. (a) False-color image. (b) Ground truth. (c) Full bands. (d) LLE. (e) Isomap. (f) MVPCA. (g) E-FDPC. (h) ASPS. (i) SOPSRL. (j) GRSC. (k) BSNet-Conv. (l) DARecNet-BS. (m) S4P. (n) SSANet-BS.
Figure 7: Classification maps with 15 bands on the DC191 dataset. (a) False-color image. (b) Ground truth. (c) Full bands. (d) LLE. (e) Isomap. (f) MVPCA. (g) E-FDPC. (h) ASPS. (i) SOPSRL. (j) GRSC. (k) BSNet-Conv. (l) DARecNet-BS. (m) S4P. (n) SSANet-BS.
Figure 8: Classification maps with 15 bands on the PU103 dataset. (a) False-color image. (b) Ground truth. (c) Full bands. (d) LLE. (e) Isomap. (f) MVPCA. (g) E-FDPC. (h) ASPS. (i) SOPSRL. (j) GRSC. (k) BSNet-Conv. (l) DARecNet-BS. (m) S4P. (n) SSANet-BS.
Figure 9: Classification maps with 15 bands on the QY176 dataset. (a) False-color image. (b) Ground truth. (c) Full bands. (d) LLE. (e) Isomap. (f) MVPCA. (g) E-FDPC. (h) ASPS. (i) SOPSRL. (j) GRSC. (k) BSNet-Conv. (l) DARecNet-BS. (m) S4P. (n) SSANet-BS.
Figure 10: The AOA values of SSANet-BS and the comparison methods on the four HSI datasets. (a) IP220. (b) DC191. (c) PU103. (d) QY176. The optimal and suboptimal results are bolded in red and black.
Figure 11: Bands and entropy values of the IP220 dataset. (a) Band 20 (7.16). (b) Band 30 (7.25). (c) Band 152 (4.83). (d) Band 210 (6.61).
Figure 12: The distribution of the 20 bands selected by each BS method (top) and the entropy of each band (bottom) for each dataset. (a) IP220. (b) DC191. (c) PU103. (d) QY176.
Figure 13: Results of the ablation study for the SSAMs on the IP220 dataset. (a) OA values. (b) AOA values. The optimal and suboptimal results are bolded in red and black.
Figure 14: The OA and AOA values of SSANet-BS, SSANet-BS-2S, and SSANet-BS-2SE on the IP220 dataset. (a) OA values. (b) AOA values. The optimal and suboptimal results are bolded in red and black.
25 pages, 5283 KiB  
Article
Predicting Apple Tree Macronutrients Using Unmanned Aerial Vehicle-Based Hyperspectral Imagery to Manage Apple Orchard Nutrients
by Ye Seong Kang, Chan Seok Ryu, Jung Gun Cho and Ki Su Park
Drones 2024, 8(8), 369; https://doi.org/10.3390/drones8080369 - 1 Aug 2024
Viewed by 774
Abstract
This study details the development of an estimation model to measure chlorophyll (Ch) and macronutrients, namely total nitrogen (T-N), phosphorus (P), potassium (K), carbon (C), calcium (Ca), and magnesium (Mg), in apple trees, using key band ratios selected from hyperspectral imagery acquired with an unmanned aerial vehicle, for nutrient management in an apple orchard. The k-nearest neighbors regression (KNR) model for Ch and all macronutrients was chosen as the best model through a comparison of calibration and validation R2 values. A total of 13 band ratios (425/429, 682/686, 710/714, 714/718, 718/722, 750/754, 754/758, 758/762, 762/766, 894/898, 898/902, 906/911, and 963/967) were selected for Ch and all macronutrients. The estimation potential for the T-N and Mg concentrations was low, with an R2 ≤ 0.37. The estimation performance for the other macronutrients was as follows: R2 ≥ 0.70 and RMSE ≤ 1.43 μg/cm2 for Ch; R2 ≥ 0.44 and RMSE ≤ 0.04% for P; R2 ≥ 0.53 and RMSE ≤ 0.23% for K; R2 ≥ 0.85 and RMSE ≤ 6.18% for C; and R2 ≥ 0.42 and RMSE ≤ 0.25% for Ca. By establishing a fertilization strategy using the macronutrients estimated through hyperspectral imagery together with measured soil chemical properties, this study presents a nutrient management decision-making method for apple orchards.
(This article belongs to the Special Issue Advances of UAV in Precision Agriculture)
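A minimal sketch of the band-ratio regression setup, assuming scikit-learn and synthetic data in place of the UAV-derived ratios and measured leaf chemistry:

```python
# Sketch of k-nearest neighbors regression on band-ratio features
# (illustrative; the paper selects 13 ratios such as 425/429 and 682/686 nm).
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n = 300
ratios = rng.uniform(0.8, 1.2, size=(n, 13))   # 13 band-ratio features
chl = rng.uniform(20, 60, size=n)              # chlorophyll, ug/cm^2 (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(ratios, chl, random_state=4)
knr = KNeighborsRegressor(n_neighbors=5).fit(X_tr, y_tr)
print("R^2:", knr.score(X_te, y_te))
```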
Figure 1: Experimental field containing the excessive (blue), moderate (yellow), and untreated (red) nitrogen fertilization groups.
Figure 2: Variations in (a) average temperature and (b) accumulated precipitation over the months of 2021 and 2022.
Figure 3: (a) Sample apple tree arrangement (green). Hyperspectral image processing procedure: (b) acquisition of the raw RGB image, (c) conversion to the normalized difference vegetation index, and (d) extraction of individual canopies.
Figure 4: Flowchart for estimating chlorophyll and macronutrients in apple tree leaves using hyperspectral imagery.
Figure 5: Reflectance curves of various band ratios for the apple tree canopy, obtained using hyperspectral imagery from (a) 2021 and (b) 2022.
Figure 6: Linear relationships between values measured by chemical analysis and values estimated by the k-nearest neighbors model with key band ratios: (a) chlorophyll; (b) total nitrogen; (c) phosphorus; (d) potassium; (e) carbon; (f) calcium; (g) magnesium.
Figure 7: Shapley additive explanation values for the key band ratios used to estimate (a) chlorophyll and the macronutrients (b) total nitrogen, (c) phosphorus, (d) potassium, (e) carbon, (f) calcium, and (g) magnesium using the k-nearest neighbors model, highlighting the highest-ranking band ratio (red).
17 pages, 2648 KiB  
Article
Multi-Feature Cross Attention-Induced Transformer Network for Hyperspectral and LiDAR Data Classification
by Zirui Li, Runbang Liu, Le Sun and Yuhui Zheng
Remote Sens. 2024, 16(15), 2775; https://doi.org/10.3390/rs16152775 - 29 Jul 2024
Viewed by 689
Abstract
Transformers have shown remarkable success in modeling sequential data and capturing intricate patterns over long distances. Their self-attention mechanism allows for efficient parallel processing and scalability, making them well suited to the high-dimensional data in hyperspectral and LiDAR imagery. However, further research is needed on how to integrate the features of the two modalities more deeply within attention mechanisms. In this paper, we propose a novel Multi-Feature Cross Attention-Induced Transformer Network (MCAITN) designed to enhance the classification accuracy of hyperspectral and LiDAR data. The MCAITN integrates the strengths of both data modalities by leveraging a cross-attention mechanism that effectively captures the complementary information between hyperspectral and LiDAR features. By utilizing a transformer-based architecture, the network is capable of learning complex spatial–spectral relationships and long-range dependencies. The cross-attention module facilitates the fusion of multi-source data, improving the network's ability to discriminate between different land cover types. Extensive experiments conducted on benchmark datasets demonstrate that the MCAITN outperforms state-of-the-art methods in terms of classification accuracy and robustness.
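The core fusion step can be pictured as standard cross-attention in which HSI tokens query LiDAR tokens. The sketch below uses PyTorch's nn.MultiheadAttention on random tensors; it illustrates the mechanism, not the MCAITN implementation, and the token counts and embedding size are arbitrary.

```python
# Minimal cross-attention sketch for HSI/LiDAR fusion (illustrative).
import torch
import torch.nn as nn

d_model = 64
attn = nn.MultiheadAttention(embed_dim=d_model, num_heads=4, batch_first=True)

hsi_tokens = torch.randn(8, 49, d_model)    # 8 patches, 7x7 spatial tokens
lidar_tokens = torch.randn(8, 49, d_model)  # matching LiDAR-derived tokens

# Query from HSI, key/value from LiDAR: each HSI token gathers the
# complementary elevation context it needs from the other modality.
fused, _ = attn(query=hsi_tokens, key=lidar_tokens, value=lidar_tokens)
print(fused.shape)  # torch.Size([8, 49, 64])
```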
Figure 1: Overall architecture of MCAITN.
Figure 2: MUUFL. (a) Pseudo-color image of the HSI. (b) Gray image of the LiDAR-based DSM. (c) Ground-truth map.
Figure 3: Trento. (a) Pseudo-color image of the HSI. (b) Gray image of the LiDAR-based DSM. (c) Ground-truth map.
Figure 4: Augsburg. (a) Pseudo-color image of the HSI. (b) Gray image of the LiDAR-based DSM. (c) Ground-truth map.
Figure 5: Classification maps of the MUUFL dataset: (a) ground truth, (b) SVM, (c) S2FL, (d) EndNet, (e) MDL, (f) LSAF, (g) CCRNet, (h) CoupledCNN, (i) HCT, and (j) MCAITN.
Figure 6: Classification maps of the Augsburg dataset: (a) ground truth, (b) SVM, (c) S2FL, (d) EndNet, (e) MDL, (f) LSAF, (g) CCRNet, (h) CoupledCNN, (i) HCT, and (j) MCAITN.
Figure 7: Classification maps of the Trento dataset: (a) ground truth, (b) SVM, (c) S2FL, (d) EndNet, (e) MDL, (f) LSAF, (g) CCRNet, (h) CoupledCNN, (i) HCT, and (j) MCAITN.
Figure 8: The impact of retained spectral dimensions on OA, AA, and the kappa coefficient. (a) MUUFL. (b) Augsburg. (c) Trento.
Figure 9: The impact of the HSI patch size on OA, AA, and the kappa coefficient. (a) MUUFL. (b) Augsburg. (c) Trento.
Figure 10: The impact of the LiDAR patch size on OA, AA, and the kappa coefficient. (a) MUUFL. (b) Augsburg. (c) Trento.
Figure 11: The impact of the learning rate on OA, AA, and the kappa coefficient. (a) MUUFL. (b) Augsburg. (c) Trento.
15 pages, 5569 KiB  
Article
Comparative Analysis of Machine Learning Techniques Using RGB Imaging for Nitrogen Stress Detection in Maize
by Sumaira Ghazal, Namratha Kommineni and Arslan Munir
AI 2024, 5(3), 1286-1300; https://doi.org/10.3390/ai5030062 - 28 Jul 2024
Viewed by 998
Abstract
Proper nitrogen management in crops is crucial to ensure optimal growth and yield maximization. While hyperspectral imagery is often used for nitrogen status estimation in crops, it is not feasible for real-time applications due to its complexity and high cost. Much of the research utilizing RGB data for detecting nitrogen stress in plants relies on datasets obtained under laboratory settings, which limits its usability in practical applications. This study focuses on identifying nitrogen deficiency in maize crops using RGB imaging data from a publicly available dataset obtained under field conditions. We propose a custom-built vision transformer model for the classification of maize into three stress classes. Additionally, we analyze the performance of convolutional neural network models, including ResNet50, EfficientNetB0, InceptionV3, and DenseNet121, for nitrogen stress estimation. Our approach involves transfer learning with fine-tuning, adding layers tailored to our specific application. Our detailed analysis shows that while the vision transformer models generalize well, they converge prematurely with a higher loss value, indicating the need for further optimization. In contrast, the fine-tuned CNN models classify the crop into stressed, non-stressed, and semi-stressed classes with higher accuracy, achieving a maximum accuracy of 97% with EfficientNetB0 as the base model. This makes our fine-tuned EfficientNetB0 model a suitable candidate for practical applications in nitrogen stress detection.
(This article belongs to the Special Issue Artificial Intelligence in Agriculture)
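A typical transfer-learning setup of the kind described, sketched with Keras; the pooling, dropout, and learning-rate choices are illustrative assumptions rather than the paper's exact head.

```python
# Sketch of fine-tuning an EfficientNetB0 backbone for 3-class nitrogen
# stress classification (illustrative head; pretrained weights need internet).
import tensorflow as tf

base = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3))
base.trainable = True                      # fine-tune rather than freeze

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(3, activation="softmax"),  # N0 / N75 / NFull
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```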
Figure 1: Flowchart of the proposed methodology for nitrogen stress detection.
Figure 2: Samples of maize images: (a) sample image from class N0 with no fertilization; (b) sample image from class N75 with 75 kg of fertilizer applied; (c) sample image from class NFull with 136 kg of fertilizer applied; (d–f) the same images as (a–c) after segmentation.
Figure 3: Architecture of our custom-built vision transformer model to classify three levels of nitrogen fertilization (N0, N75, and NFull).
Figure 4: Architecture of the fine-tuned vision transformer model to classify three levels of nitrogen fertilization (N0, N75, and NFull).
Figure 5: Architecture of the fine-tuned CNN-based model to classify three levels of nitrogen fertilization (N0, N75, and NFull).
Figure 6: Custom-built vision transformer model results for training and validation losses at two image resolutions (100 × 100 and 224 × 224) for nitrogen fertilization-level prediction.
Figure 7: Fine-tuned vision transformer model results for training and validation losses at an image resolution of 224 × 224 for nitrogen fertilization-level prediction.
Figure 8: Best fine-tuned CNN model results for training and validation losses at two image resolutions for nitrogen fertilization-level prediction.
Figure 9: Classification results for individual fertilization-level classes at an image resolution of 224 × 224.
19 pages, 2488 KiB  
Article
Predicting Grapevine Physiological Parameters Using Hyperspectral Remote Sensing Integrated with Hybrid Convolutional Neural Network and Ensemble Stacked Regression
by Prakriti Sharma, Roberto Villegas-Diaz and Anne Fennell
Remote Sens. 2024, 16(14), 2626; https://doi.org/10.3390/rs16142626 - 18 Jul 2024
Viewed by 679
Abstract
Grapevine rootstocks are gaining importance in viticulture as a strategy to combat abiotic challenges as well as to enhance scion physiology. Direct leaf-level physiological parameters such as net assimilation rate, stomatal conductance to water vapor, quantum yield of PSII, and transpiration can illuminate the rootstock effect on scion physiology. However, these measures are time-consuming and limited to leaf-level analysis. This study used different rootstocks to investigate the potential application of aerial hyperspectral imagery for estimating canopy-level measurements. A statistical framework was developed as an ensemble stacked regression (REGST) that aggregated five individual machine learning algorithms: least absolute shrinkage and selection operator (Lasso), partial least squares regression (PLSR), ridge regression (RR), elastic net (ENET), and principal component regression (PCR), to optimize high-throughput assessment of vine physiology. In addition, a Convolutional Neural Network (CNN) was integrated into the existing REGST, forming a hybrid CNN-REGST model with the aim of capturing patterns from the hyperspectral signal. Based on the findings, the performance of the individual base models exhibited variable prediction accuracies. In most cases, ridge regression (RR) demonstrated the lowest test root mean squared error (RMSE). The ensemble stacked regression model (REGST) outperformed the individual machine learning algorithms, increasing R2 by 0.03 to 0.1. The performances of CNN-REGST and REGST were similar in estimating the four traits. Overall, these models were able to explain approximately 55–67% of the variation in the ground-truth data. This study suggests that hyperspectral features integrated with powerful AI approaches show great potential for tracing functional traits in grapevines.
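The REGST structure maps naturally onto scikit-learn's StackingRegressor: the five linear-family learners as base estimators and a random forest as the meta-model. A self-contained sketch on synthetic spectra (hyperparameters are illustrative, not the paper's tuned values):

```python
# Sketch of an ensemble stacked regression in the spirit of REGST
# (illustrative; synthetic spectra stand in for the UAV hyperspectral data).
import numpy as np
from sklearn.ensemble import StackingRegressor, RandomForestRegressor
from sklearn.linear_model import Lasso, Ridge, ElasticNet, LinearRegression
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 270))            # hyperspectral bands
y = X[:, :10].sum(axis=1) + rng.normal(scale=0.1, size=200)  # trait proxy

base = [
    ("pls", PLSRegression(n_components=10)),
    ("lasso", Lasso(alpha=0.01)),
    ("ridge", Ridge(alpha=1.0)),
    ("enet", ElasticNet(alpha=0.01)),
    ("pcr", make_pipeline(PCA(n_components=10), LinearRegression())),
]
regst = StackingRegressor(estimators=base,
                          final_estimator=RandomForestRegressor(random_state=5))
regst.fit(X, y)
print("R^2:", regst.score(X, y))
```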
Figure 1: (A) Fully developed, sun-exposed leaves measured by a LI-COR 6800 for the ground-truth (directly measured) variables. (B) UAV platform (DJI Matrice 600 drone) for indirect measurement of physiological traits through aerial remote sensing. (C) Headwall Nano hyperspectral sensor mounted in a gimbal carried by the UAV.
Figure 2: Workflow designed for (i) the 1D CNN model architecture, which consists of three convolutional layers and two max-pooling layers that are flattened and forwarded to a dense layer to estimate the final output; (ii) the ensemble stacked regression model (REGST), where partial least squares regression (PLSR), least absolute shrinkage and selection operator (Lasso), ridge regression (RR), elastic net (ENET), and principal component regression (PCR) are used as base models, and random forest regression (RF) is used as a meta-model to make final predictions on the physiological parameters; and (iii) the proposed hybrid CNN-REGST, designed with the CNN for feature extraction and REGST for the regression task. The initial CNN operates on the one-dimensional hyperspectral data, and the flattened layer is forwarded as input to the REGST model to make predictions for the physiological traits.
Figure 3: Model pipeline with a cross-validation approach for predicting net assimilation rate, stomatal conductance to water vapor, quantum yield of PSII, and transpiration.
Figure 4: Distribution of physiological measures in the scion as conferred by six different rootstocks: A = net assimilation rate (µmol m⁻² s⁻¹); gsw = stomatal conductance to water vapor (mol m⁻² s⁻¹); ϕPSII = effective quantum yield of PSII in the light-adapted state; E = transpiration rate (mmol m⁻² s⁻¹); recorded in 2021 and 2022.
Figure 5: Relationship between hyperspectral data and various physiological traits. Correlation coefficient (r) for net carbon assimilation (A, green; µmol m⁻² s⁻¹), transpiration (E, orange; mmol m⁻² s⁻¹), stomatal conductance to water vapor (gsw, blue; mol m⁻² s⁻¹), and quantum yield of PSII (ϕPSII, magenta).
Figure 6: Feature importance score assigned to each wavelength by the REGST model: net carbon assimilation rate (A, green; µmol m⁻² s⁻¹), stomatal conductance to water vapor (gsw, purple; mol m⁻² s⁻¹), quantum yield of PSII (ϕPSII, mauve), and transpiration (E, blue; mmol m⁻² s⁻¹).
16 pages, 3209 KiB  
Article
Ensemble Band Selection for Quantification of Soil Total Nitrogen Levels from Hyperspectral Imagery
by Khalil Misbah, Ahmed Laamrani, Paul Voroney, Keltoum Khechba, Raffaele Casa and Abdelghani Chehbouni
Remote Sens. 2024, 16(14), 2549; https://doi.org/10.3390/rs16142549 - 11 Jul 2024
Viewed by 709
Abstract
Total nitrogen (TN) is a critical nutrient for plant growth, and its monitoring in agricultural soil is vital for farm managers. Traditional methods of estimating soil TN levels involve laborious and costly chemical analyses, especially when applied to large areas with multiple sampling points. Remote sensing offers a promising alternative for identifying, tracking, and mapping soil TN levels at various scales, including the field, landscape, and regional levels. Spaceborne hyperspectral sensing has proved effective at reflecting soil TN levels. This study evaluates the efficiency of spectral reflectance in the visible and near-infrared (VNIR) and shortwave infrared (SWIR) regions for identifying the hyperspectral bands most responsive to the TN content of agricultural soil. In this context, we used PRISMA (PRecursore IperSpettrale della Missione Applicativa) hyperspectral imagery with ensemble learning to identify N-specific absorption features. The ensemble consisted of three multivariate regression learners: partial least squares regression (PLSR), support vector regression (SVR), and Gaussian process regression (GPR). The soil TN data (n = 803) were analyzed against PRISMA hyperspectral imagery to perform spectral band selection. The 803 sampled data points were derived from open-access soil property and nutrient maps for Africa at a 30 m resolution over a bare agricultural field in southern Morocco. The ensemble learning strategy identified several bands in the SWIR, in the regions of 900–1300 nm and 1900–2200 nm. The models achieved coefficient-of-determination (R²) values ranging from 0.63 to 0.73 and root-mean-square error (RMSE) values of 0.14 g/kg for PLSR, 0.11 g/kg for SVR, and 0.12 g/kg for GPR; the ensemble boosted these to an R² of 0.84, an RMSE of 0.08 g/kg, and an RPD of 2.53, demonstrating the model's accuracy in predicting the soil TN content. These results underscore the potential of spaceborne hyperspectral imagery for soil TN estimation, enabling the development of decision-support tools for variable-rate fertilization and advancing our understanding of soil spectral responses for improved soil management.
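As a concrete illustration of the multimethod band-selection idea, the sketch below scores every band with each learner, averages the normalized scores, and keeps the top-ranked bands. The scoring choices (absolute coefficients for PLSR and a linear SVR, permutation importance for GPR), all hyperparameters, and the toy data are assumptions for illustration, not the authors' exact procedure.

```python
# Minimal sketch of ensemble band ranking: each learner scores every
# spectral band, the min-max-normalized scores are averaged, and the
# highest-ranked bands are retained.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.inspection import permutation_importance
from sklearn.svm import LinearSVR

rng = np.random.default_rng(0)
X = rng.random((200, 230))   # 230 VNIR+SWIR bands (illustrative count)
y = 0.6 * X[:, 50] + 0.4 * X[:, 180] + rng.normal(0, 0.05, 200)  # toy TN

def minmax(v):
    # Rescale scores to [0, 1] so the three learners are comparable.
    return (v - v.min()) / (v.max() - v.min() + 1e-12)

pls = PLSRegression(n_components=8).fit(X, y)
svr = LinearSVR(C=1.0, max_iter=10_000).fit(X, y)
gpr = GaussianProcessRegressor().fit(X, y)
gpr_imp = permutation_importance(gpr, X, y, n_repeats=5, random_state=0)

scores = (minmax(np.abs(pls.coef_).ravel())
          + minmax(np.abs(svr.coef_))
          + minmax(gpr_imp.importances_mean)) / 3.0

top_bands = np.argsort(scores)[::-1][:20]   # 20 most informative bands
print(sorted(top_bands))
```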
Figure 1. Location of the study sites in southern Morocco within the Haouz plain. PRISMA images were used to cover the three investigated areas: (A) Mejjat; (B) West N’fis; and (C) Central Haouz.
Figure 2. Summary of descriptive statistics for the extracted soil TN content within the Haouz plain: n is the number of samples, Min and Max are the minimum and maximum sample values, and SD is the standard deviation.
Figure 3. Soil sample spectra representing the mean TN content after adjustments over the research area. The dotted lines represent the variation in the hyperspectral dataset. The strong atmospheric absorption in two regions (1338–1449 nm and 1793–1993 nm) is due to water vapor in those spectral regions.
Figure 4. Diagram of the datasets used and the ensemble method relating the soil TN dataset to the PRISMA hyperspectral imagery. The ensemble comprises three learners: PLSR (partial least squares regression), SVR (support vector regression), and GPR (Gaussian process regression).
Figure 5. Coefficients of PLSR, SVR, and GPR used as measures of band relevance within the ensemble for the relationship between topsoil reflectance and soil TN. The black-to-white gradients depict the relative significance of each spectral band, and the selected bands are marked in yellow.
Figure 6. Scatter plot of measured versus predicted soil TN levels using the PLSR, SVR, and GPR models in the multimethod ensemble. The plot shows one fold of a 10-fold cross-validation, with the black dashed line indicating the ideal 1:1 relationship between measured and predicted values.
Figure 7. Relationship between the total nitrogen (TN) content (g/kg) and soil organic matter (SOM) in the soil samples. Point density is indicated by color intensity, with darker shades of blue representing higher densities of observations. A strong positive correlation is observed, as illustrated by the trend line, suggesting that higher levels of organic matter are associated with an increased total nitrogen content.
13 pages, 2277 KiB  
Technical Note
Early Radiometric Assessment of NOAA-21 Visible Infrared Imaging Radiometer Suite Reflective Solar Bands Using Vicarious Techniques
by Aisheng Wu, Xiaoxiong Xiong, Qiaozhen Mu, Amit Angal, Rajendra Bhatt and Yolanda Shea
Remote Sens. 2024, 16(14), 2528; https://doi.org/10.3390/rs16142528 - 10 Jul 2024
Viewed by 505
Abstract
The VIIRS instrument on the JPSS-2 (renamed NOAA-21 upon reaching orbit) spacecraft was launched in November 2022, making it the third sensor in the VIIRS series, following those onboard the SNPP and NOAA-20 spacecraft, which are operating nominally. As a multi-disciplinary instrument, the VIIRS provides the worldwide user community with high-quality imagery and radiometric measurements of the land, atmosphere, cryosphere, and oceans. This study provides an early assessment of the calibration stability and radiometric consistency of the NOAA-21 VIIRS reflective solar bands (RSBs) using the latest NASA SIPS C2 L1B products. Vicarious approaches are employed, relying on reflectance data obtained from the Libya-4 desert and Dome C sites, deep convective clouds, and simultaneous nadir overpasses, as well as intercomparison with the first two VIIRS instruments using MODIS as a transfer radiometer. The impact of existing band spectral differences on the sensor-to-sensor comparison is corrected using scene-specific a priori hyperspectral observations from the SCIAMACHY sensor onboard the ENVISAT platform. The results indicate that the overall radiometric performance of the newly launched NOAA-21 VIIRS is quantitatively comparable to that of NOAA-20 for the VIS and NIR bands. For some SWIR bands, the measured reflectances are lower by more than 2%. An upward adjustment of 6.1% in the gain of band M11 (2.25 µm), based on lunar intercomparison results, yields results more consistent with the NOAA-20 VIIRS.
(This article belongs to the Section Satellite Missions for Earth and Planetary Exploration)
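The spectral-difference correction mentioned in the abstract is commonly expressed as a spectral band adjustment factor (SBAF): the ratio of the band-average reflectances obtained by convolving one hyperspectral reference spectrum with each sensor's relative spectral response. A minimal sketch follows, assuming Gaussian response functions and a toy desert-like spectrum in place of the SCIAMACHY observations.

```python
# Minimal SBAF sketch: band-average a reference spectrum with each
# sensor's relative spectral response (RSR) and take the ratio. The
# Gaussian RSRs and the toy spectrum are illustrative assumptions.
import numpy as np

wl = np.arange(400.0, 1000.0, 1.0)      # wavelength grid, nm
rho = 0.30 + 0.0002 * (wl - 400.0)      # toy desert-like reflectance spectrum

def gaussian_rsr(center_nm, fwhm_nm):
    sigma = fwhm_nm / 2.3548            # FWHM -> standard deviation
    return np.exp(-0.5 * ((wl - center_nm) / sigma) ** 2)

rsr_a = gaussian_rsr(865.0, 20.0)       # e.g., a VIIRS-like NIR band
rsr_b = gaussian_rsr(860.0, 35.0)       # e.g., a MODIS-like NIR band

def band_reflectance(rsr):
    # RSR-weighted band-average reflectance on a uniform wavelength grid
    return np.sum(rho * rsr) / np.sum(rsr)

# Multiplying sensor B's measured band reflectance by this factor makes it
# directly comparable with sensor A for this scene.
sbaf = band_reflectance(rsr_a) / band_reflectance(rsr_b)
print(f"SBAF = {sbaf:.4f}")
```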
Figure 1. Trends of the BRDF-normalized reflectances before SBAF correction for SNPP, NOAA-20, and NOAA-21 VIIRS bands M1, M3, M7, and M10 over the Libya-4 site.
Figure 2. Trends of the reflectances normalized by the SNPP mean before SBAF correction for SNPP, NOAA-20, and NOAA-21 VIIRS bands M1, M3, M7, and M8 from DCCs.
Figure 3. Reflectances as a function of solar zenith angle for SNPP, NOAA-20, and NOAA-21 VIIRS bands M1, M3, I1, and I2 over the Dome C site.
Figure 4. VIIRS-to-MODIS reflectance ratios versus reflectance for SNPP, NOAA-20, and NOAA-21 VIIRS bands M2, M4, M7, and M10 from SNOs. Pixel-level ratios are binned at a reflectance interval of 0.02.
Figure 5. Following the establishment of the absolute calibration of NOAA-20 VIIRS via CPF, it can serve as a bridge between CPF and other reflective solar instruments.
20 pages, 5518 KiB  
Article
Hierarchical Prototype-Aligned Graph Neural Network for Cross-Scene Hyperspectral Image Classification
by Danyao Shen, Haojie Hu, Fang He, Fenggan Zhang, Jianwei Zhao and Xiaowei Shen
Remote Sens. 2024, 16(13), 2464; https://doi.org/10.3390/rs16132464 - 5 Jul 2024
Viewed by 629
Abstract
The objective of cross-scene hyperspectral image (HSI) classification is to develop models capable of adapting to the “domain gap” that exists between different scenes, enabling accurate object classification in previously unseen scenes. Researchers have devised various domain adaptation techniques aimed at aligning the statistical or spectral distributions of data from diverse scenes. However, many previous studies have overlooked the potential benefits of incorporating the spatial topological information of hyperspectral imagery, which can provide a more faithful representation of the inherent data structure of HSIs. To overcome this issue, we introduce an approach to cross-scene HSI classification founded on hierarchical prototype graph alignment. Specifically, the method uses prototypes as representative embedded representations of all samples within the same class. Multi-scale domain alignment is attained by applying multiple graph convolution and pooling operations. Beyond statistical distribution alignment, we integrate graph matching to effectively reconcile semantic and topological information. Experiments on several datasets show significantly improved accuracy and generalization for cross-scene HSI classification tasks.
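To illustrate the prototype idea, the sketch below computes a class prototype as the mean embedding of all samples of that class and defines an alignment loss that pulls corresponding source and target prototypes together. This is a minimal PyTorch rendering of the distribution-alignment piece only; the paper's graph convolution, pooling, and graph-matching stages are not reproduced, and all shapes and data are illustrative.

```python
# Minimal PyTorch sketch of class prototypes and a prototype-alignment
# loss. In unsupervised adaptation the target labels would typically be
# pseudo-labels predicted by the current model.
import torch

def class_prototypes(embeddings, labels, n_classes):
    # embeddings: (N, D); labels: (N,) integer class ids.
    # A prototype is the mean embedding of all samples in one class.
    return torch.stack(
        [embeddings[labels == c].mean(dim=0) for c in range(n_classes)]
    )

def prototype_alignment_loss(src_emb, src_lbl, tgt_emb, tgt_lbl, n_classes):
    # Pull source and target prototypes of the same class together.
    p_src = class_prototypes(src_emb, src_lbl, n_classes)
    p_tgt = class_prototypes(tgt_emb, tgt_lbl, n_classes)
    return ((p_src - p_tgt) ** 2).sum(dim=1).mean()

# Toy usage with random embeddings for 7 classes in each domain.
torch.manual_seed(0)
src_emb, src_lbl = torch.randn(140, 64), torch.arange(140) % 7
tgt_emb, tgt_lbl = torch.randn(140, 64), torch.arange(140) % 7
print(prototype_alignment_loss(src_emb, src_lbl, tgt_emb, tgt_lbl, 7))
```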
Figure 1. Illustration of joint distribution alignment and topological structure alignment for cross-scene learning.
Figure 2. Calculation flowchart of the GOT distance.
Figure 3. Flowchart of the proposed HPGA.
Figure 4. Pseudocolor images and ground truth maps of Houston: (a) pseudocolor image of Houston 2013; (b) ground truth map of Houston 2013; (c) pseudocolor image of Houston 2018; (d) ground truth map of Houston 2018.
Figure 5. Pseudocolor images and ground truth maps of HyRANK: (a) pseudocolor image of Dioni; (b) ground truth map of Dioni; (c) pseudocolor image of Loukia; (d) ground truth map of Loukia.
Figure 6. Classification result maps for the target scene Houston 2018: (a) false color image; (b) ground truth; (c) DAN; (d) DAAN; (e) MRAN; (f) DSAN; (g) HTCNN; (h) BCAN; (i) HPGA.
Figure 7. Classification result maps for the target scene HyRANK: (a) false color image; (b) ground truth; (c) DAN; (d) DAAN; (e) MRAN; (f) DSAN; (g) HTCNN; (h) BCAN; (i) HPGA.
Figure 8. 2D visualizations of the features before and after domain adaptation on the Houston dataset.
Figure 9. The impact of the parameters λ₁ and λ₂ on the classification results.
24 pages, 21129 KiB  
Article
Early-Season Crop Mapping by PRISMA Images Using Machine/Deep Learning Approaches: Italy and Iran Test Cases
by Saham Mirzaei, Simone Pascucci, Maria Francesca Carfora, Raffaele Casa, Francesco Rossi, Federico Santini, Angelo Palombo, Giovanni Laneve and Stefano Pignatti
Remote Sens. 2024, 16(13), 2431; https://doi.org/10.3390/rs16132431 - 2 Jul 2024
Viewed by 1057
Abstract
Despite its high importance for crop yield prediction and monitoring, early-season crop mapping is severely hampered by the absence of timely ground truth. To cope with this issue, this study evaluates the capability of PRISMA hyperspectral satellite images, compared with Sentinel-2 multispectral imagery, to produce early- and in-season crop maps using consolidated machine and deep learning algorithms. Results show that crop type classification using Sentinel-2 images is markedly less accurate than with PRISMA (a 14% difference in overall accuracy (OA)). With PRISMA images, the 1D-CNN algorithm achieves the highest in-season accuracy (89%, 91%, and 92% OA for winter, summer, and perennial cultivations, respectively) and is the fastest algorithm to reach acceptable accuracy (OA 80%) in early-season mapping of the winter, summer, and perennial cultivations. Moreover, the 1D-CNN shows only a limited reduction (6%) in performance across farms, making it the best algorithm for operational crop mapping in cross-farm applications. Applying the machine/deep learning classification algorithms to the test fields cross-scene demonstrates that PRISMA hyperspectral time series images can provide good results for early- and in-season crop mapping.
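For readers wanting a concrete starting point, below is a hypothetical PyTorch sketch of a per-pixel 1D-CNN crop classifier of the kind evaluated here: 1D convolutions over a PRISMA-like spectrum ending in a linear layer over crop classes. The band count, class count, and layer sizes are illustrative assumptions, not the paper's architecture.

```python
# Hypothetical per-pixel 1D-CNN crop classifier in PyTorch. Band count,
# class count, and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

n_bands, n_classes = 230, 12            # assumed PRISMA bands / crop types

model = nn.Sequential(
    nn.Conv1d(1, 32, kernel_size=7), nn.ReLU(),
    nn.MaxPool1d(2),
    nn.Conv1d(32, 64, kernel_size=5), nn.ReLU(),
    nn.MaxPool1d(2),
    nn.Flatten(),
    # 54 spectral positions remain after the conv/pool stack:
    # ((230 - 6) // 2 - 4) // 2 = 54
    nn.Linear(64 * 54, 128), nn.ReLU(),
    nn.Linear(128, n_classes),          # logits; train with CrossEntropyLoss
)

# One toy training step on random spectra standing in for PRISMA pixels.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
X = torch.rand(32, 1, n_bands)
y = torch.randint(0, n_classes, (32,))
loss = nn.CrossEntropyLoss()(model(X), y)
loss.backward()
optimizer.step()
print(float(loss))
```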
Figure 1. Location of the study areas in (a) Italy and (b) Iran, and field boundaries of (c) Jolanda di Savoia, northeast Italy; (d) Maccarese, central Italy; (e) Grosseto, central Italy; and (f) MKK, southwest Iran.
Figure 2. Flowchart of crop mapping from Sentinel-2 and PRISMA satellite images via the ML/DL methods.
Figure 3. Field photos of maize and wheat at different phenological stages at the Maccarese and Jolanda di Savoia farms in Italy.
Figure 4. Architecture of the 1D-CNN (top) and 3D-CNN (bottom).
Figure 5. Simulated plant phenology derived from Sentinel-2 NDVI time series data for (a) winter and durum wheat, barley, triticale, herbage, fava bean, and pea; (b) maize, sorghum, rice, soybean, sunflower, and tomato; and (c) alfalfa, olive, almond, apple, pear, and cardoon, from the 2023 season.
Figure 6. PRISMA-derived spectral signatures of (a) winter and durum wheat, barley, and triticale acquired on 1 April 2021 at the Maccarese farm; (b) herbage and fava bean acquired on 1 April 2021 at the Maccarese farm and pea acquired on 24 April 2021 at the Jolanda di Savoia farm; (c) maize, sorghum, rice, soybean, sunflower, and tomato acquired on 3 June 2022 at the Jolanda di Savoia farm; (d) olive, almond, and alfalfa acquired on 17 May 2021 at the Maccarese farm and pear and apple acquired on 4 June 2021 at the Jolanda di Savoia farm; and (e) similarity between the spectra of different species in the PRISMA images from the four selected test sites.
Figure 7. PRISMA-derived spectral signatures of (a) winter and durum wheat and (b) three different alfalfa fields at the Jolanda di Savoia farm in the image acquired on 3 July 2022.
Figure 8. Per-species OA curves of early-season crop mapping for the Mediterranean crop calendar for (a) winter, (b) summer, and (c) perennial cultivations using the 1D-CNN algorithm and PRISMA images. The vertical dashed red line marks the DOY at which the OA first reached 80% or higher.
Figure 9. Overall accuracy (%) of the MNB, KNN, SVM, RF, 1D-CNN, and 3D-CNN classification methods for (a) same-farm TR/ACC with PRISMA, (b) cross-farm TR/ACC with PRISMA, (c) same-farm TR/ACC with Sentinel-2, and (d) cross-farm TR/ACC with Sentinel-2 accuracy assessment for all the selected sites.
Figure 10. Per-species classification (a) PA and (b) UA derived from PRISMA and Sentinel-2 images for the MNB, KNN, SVM, RF, 1D-CNN, and 3D-CNN classification methods for winter, summer, and perennial cultivations for all the selected sites.
Figure 11. Ground truth of (a) Jolanda di Savoia, (b) Maccarese, (c) MKK, and (d) Grosseto. Crop type maps for the entire growing season produced from PRISMA images using (e) 3D-CNN for the Jolanda di Savoia farm in 2022, (f) 1D-CNN for the Maccarese farm in 2021, (g) 3D-CNN for the MKK farm in 2022, and (h) 1D-CNN for the Grosseto farm in 2020.
Figure 12. (a) Ground truth; (b) classification result for a non-homogeneous field at the Jolanda di Savoia farm, with high misclassification in a field cultivated with durum wheat; (c) PRISMA RGB (bands 32-21-10) acquired on 30 April 2022; (d) spectral reflectance of sparse and dense wheat.
Figure 13. Kappa coefficients of small (<150 m), moderate (150–250 m), and large (>250 m) fields from PRISMA and Sentinel-2 images.